2026-01-05 00:00:07.112304 | Job console starting
2026-01-05 00:00:07.130173 | Updating git repos
2026-01-05 00:00:07.641996 | Cloning repos into workspace
2026-01-05 00:00:08.013980 | Restoring repo states
2026-01-05 00:00:08.058247 | Merging changes
2026-01-05 00:00:08.058268 | Checking out repos
2026-01-05 00:00:08.712874 | Preparing playbooks
2026-01-05 00:00:10.371776 | Running Ansible setup
2026-01-05 00:00:20.376280 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-05 00:00:25.073090 |
2026-01-05 00:00:25.073277 | PLAY [Base pre]
2026-01-05 00:00:25.095689 |
2026-01-05 00:00:25.095845 | TASK [Setup log path fact]
2026-01-05 00:00:25.171480 | orchestrator | ok
2026-01-05 00:00:25.221344 |
2026-01-05 00:00:25.221566 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-05 00:00:25.373971 | orchestrator | ok
2026-01-05 00:00:25.401931 |
2026-01-05 00:00:25.402077 | TASK [emit-job-header : Print job information]
2026-01-05 00:00:25.467426 | # Job Information
2026-01-05 00:00:25.468717 | Ansible Version: 2.16.14
2026-01-05 00:00:25.468761 | Job: testbed-update-stable-current-ubuntu-24.04
2026-01-05 00:00:25.468796 | Pipeline: periodic-midnight
2026-01-05 00:00:25.468820 | Executor: 521e9411259a
2026-01-05 00:00:25.468841 | Triggered by: https://github.com/osism/testbed
2026-01-05 00:00:25.468863 | Event ID: 9a1e5e94553547229e870b2662f29864
2026-01-05 00:00:25.510452 |
2026-01-05 00:00:25.511829 | LOOP [emit-job-header : Print node information]
2026-01-05 00:00:25.818762 | orchestrator | ok:
2026-01-05 00:00:25.823145 | orchestrator | # Node Information
2026-01-05 00:00:25.823246 | orchestrator | Inventory Hostname: orchestrator
2026-01-05 00:00:25.823276 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-05 00:00:25.823301 | orchestrator | Username: zuul-testbed06
2026-01-05 00:00:25.823322 | orchestrator | Distro: Debian 12.12
2026-01-05 00:00:25.823346 | orchestrator | Provider: static-testbed
2026-01-05 00:00:25.823367 | orchestrator | Region:
2026-01-05 00:00:25.823407 | orchestrator | Label: testbed-orchestrator
2026-01-05 00:00:25.823428 | orchestrator | Product Name: OpenStack Nova
2026-01-05 00:00:25.823448 | orchestrator | Interface IP: 81.163.193.140
2026-01-05 00:00:25.860371 |
2026-01-05 00:00:25.861858 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-05 00:00:28.916066 | orchestrator -> localhost | changed
2026-01-05 00:00:28.924799 |
2026-01-05 00:00:28.924944 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-05 00:00:33.095275 | orchestrator -> localhost | changed
2026-01-05 00:00:33.124986 |
2026-01-05 00:00:33.125148 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-05 00:00:34.935941 | orchestrator -> localhost | ok
2026-01-05 00:00:34.941653 |
2026-01-05 00:00:34.941746 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-05 00:00:35.024173 | orchestrator | ok
2026-01-05 00:00:35.082787 | orchestrator | included: /var/lib/zuul/builds/a54607074d3f4bc6b3302cee85a7e89a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-05 00:00:35.122196 |
2026-01-05 00:00:35.122295 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-05 00:00:41.397492 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-05 00:00:41.397714 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a54607074d3f4bc6b3302cee85a7e89a/work/a54607074d3f4bc6b3302cee85a7e89a_id_rsa
2026-01-05 00:00:41.397755 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a54607074d3f4bc6b3302cee85a7e89a/work/a54607074d3f4bc6b3302cee85a7e89a_id_rsa.pub
2026-01-05 00:00:41.397778 | orchestrator -> localhost | The key fingerprint is:
2026-01-05 00:00:41.397798 | orchestrator -> localhost | SHA256:8eKNo+no8q0r7GKw+JOHQO0opbwWXe9BvvziaHHGFic zuul-build-sshkey
2026-01-05 00:00:41.397816 | orchestrator -> localhost | The key's randomart image is:
2026-01-05 00:00:41.397844 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-05 00:00:41.397862 | orchestrator -> localhost | | |
2026-01-05 00:00:41.397880 | orchestrator -> localhost | | |
2026-01-05 00:00:41.397896 | orchestrator -> localhost | | . . |
2026-01-05 00:00:41.397912 | orchestrator -> localhost | | ... . E + |
2026-01-05 00:00:41.397929 | orchestrator -> localhost | |oo+ . = S . |
2026-01-05 00:00:41.397951 | orchestrator -> localhost | |=+ o . X + |
2026-01-05 00:00:41.397969 | orchestrator -> localhost | |++oo B * . |
2026-01-05 00:00:41.397987 | orchestrator -> localhost | |++B .+.B . |
2026-01-05 00:00:41.398005 | orchestrator -> localhost | |o+oBB=*.o. |
2026-01-05 00:00:41.398021 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-05 00:00:41.398070 | orchestrator -> localhost | ok: Runtime: 0:00:04.176004
2026-01-05 00:00:41.404143 |
2026-01-05 00:00:41.404229 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-05 00:00:41.443491 | orchestrator | ok
2026-01-05 00:00:41.460847 | orchestrator | included: /var/lib/zuul/builds/a54607074d3f4bc6b3302cee85a7e89a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-05 00:00:41.472007 |
2026-01-05 00:00:41.472109 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-05 00:00:41.520860 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:41.542266 |
2026-01-05 00:00:41.542379 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-05 00:00:42.523544 | orchestrator | changed
2026-01-05 00:00:42.529273 |
2026-01-05 00:00:42.529363 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-05 00:00:42.853044 | orchestrator | ok
2026-01-05 00:00:42.860039 |
2026-01-05 00:00:42.860139 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-05 00:00:43.423096 | orchestrator | ok
2026-01-05 00:00:43.435480 |
2026-01-05 00:00:43.435580 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-05 00:00:43.988890 | orchestrator | ok
2026-01-05 00:00:43.993867 |
2026-01-05 00:00:43.997981 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-05 00:00:44.041113 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:44.047866 |
2026-01-05 00:00:44.047971 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-05 00:00:45.349354 | orchestrator -> localhost | changed
2026-01-05 00:00:45.362478 |
2026-01-05 00:00:45.362578 | TASK [add-build-sshkey : Add back temp key]
2026-01-05 00:00:46.791304 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a54607074d3f4bc6b3302cee85a7e89a/work/a54607074d3f4bc6b3302cee85a7e89a_id_rsa (zuul-build-sshkey)
2026-01-05 00:00:46.791502 | orchestrator -> localhost | ok: Runtime: 0:00:00.042763
2026-01-05 00:00:46.802569 |
2026-01-05 00:00:46.802652 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-05 00:00:47.483771 | orchestrator | ok
2026-01-05 00:00:47.528401 |
2026-01-05 00:00:47.528512 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-05 00:00:47.635741 | orchestrator | skipping: Conditional result was False
2026-01-05 00:00:47.862180 |
2026-01-05 00:00:47.866135 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-05 00:00:48.573585 | orchestrator | ok
2026-01-05 00:00:48.605601 |
2026-01-05 00:00:48.605706 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-05 00:00:48.662221 | orchestrator | ok
2026-01-05 00:00:48.668963 |
2026-01-05 00:00:48.669061 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-05 00:00:50.093889 | orchestrator -> localhost | ok
2026-01-05 00:00:50.100093 |
2026-01-05 00:00:50.100180 | TASK [validate-host : Collect information about the host]
2026-01-05 00:00:52.298074 | orchestrator | ok
2026-01-05 00:00:52.344230 |
2026-01-05 00:00:52.344339 | TASK [validate-host : Sanitize hostname]
2026-01-05 00:00:52.499687 | orchestrator | ok
2026-01-05 00:00:52.506895 |
2026-01-05 00:00:52.507003 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-05 00:00:54.955842 | orchestrator -> localhost | changed
2026-01-05 00:00:54.960837 |
2026-01-05 00:00:54.960915 | TASK [validate-host : Collect information about zuul worker]
2026-01-05 00:00:55.636761 | orchestrator | ok
2026-01-05 00:00:55.641582 |
2026-01-05 00:00:55.641672 | TASK [validate-host : Write out all zuul information for each host]
2026-01-05 00:00:58.165513 | orchestrator -> localhost | changed
2026-01-05 00:00:58.176067 |
2026-01-05 00:00:58.176170 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-05 00:00:58.522338 | orchestrator | ok
2026-01-05 00:00:58.534940 |
2026-01-05 00:00:58.535946 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-05 00:02:16.772923 | orchestrator | changed:
2026-01-05 00:02:16.773159 | orchestrator | .d..t...... src/
2026-01-05 00:02:16.773195 | orchestrator | .d..t...... src/github.com/
2026-01-05 00:02:16.773220 | orchestrator | .d..t...... src/github.com/osism/
2026-01-05 00:02:16.773242 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-05 00:02:16.773263 | orchestrator | RedHat.yml
2026-01-05 00:02:16.788124 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-05 00:02:16.788141 | orchestrator | RedHat.yml
2026-01-05 00:02:16.788193 | orchestrator | = 2.2.0"...
2026-01-05 00:02:28.288985 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-05 00:02:28.308845 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-01-05 00:02:28.444603 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-05 00:02:29.050318 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-05 00:02:29.107688 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-05 00:02:29.594527 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-05 00:02:29.657121 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-05 00:02:30.317130 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-05 00:02:30.317231 | orchestrator |
2026-01-05 00:02:30.317238 | orchestrator | Providers are signed by their developers.
2026-01-05 00:02:30.317244 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-05 00:02:30.317256 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-05 00:02:30.317310 | orchestrator |
2026-01-05 00:02:30.317316 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-05 00:02:30.317329 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-05 00:02:30.317334 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-05 00:02:30.317344 | orchestrator | you run "tofu init" in the future.
2026-01-05 00:02:30.317893 | orchestrator |
2026-01-05 00:02:30.317960 | orchestrator | OpenTofu has been successfully initialized!
2026-01-05 00:02:30.317987 | orchestrator |
2026-01-05 00:02:30.317992 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-05 00:02:30.318006 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-05 00:02:30.318011 | orchestrator | should now work.
2026-01-05 00:02:30.318031 | orchestrator |
2026-01-05 00:02:30.318035 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-05 00:02:30.318040 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-05 00:02:30.318058 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-05 00:02:30.538097 | orchestrator | Created and switched to workspace "ci"!
2026-01-05 00:02:30.538158 | orchestrator |
2026-01-05 00:02:30.538164 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-05 00:02:30.538170 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-05 00:02:30.538177 | orchestrator | for this configuration.
2026-01-05 00:02:30.724521 | orchestrator | ci.auto.tfvars
2026-01-05 00:02:30.808434 | orchestrator | default_custom.tf
2026-01-05 00:02:32.262103 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-05 00:02:32.825322 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-05 00:02:36.647361 | orchestrator |
2026-01-05 00:02:36.647443 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-05 00:02:36.647450 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-05 00:02:36.647476 | orchestrator | + create
2026-01-05 00:02:36.647495 | orchestrator | <= read (data resources)
2026-01-05 00:02:36.647510 | orchestrator |
2026-01-05 00:02:36.647515 | orchestrator | OpenTofu will perform the following actions:
2026-01-05 00:02:36.647638 | orchestrator |
2026-01-05 00:02:36.647656 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-05 00:02:36.647662 | orchestrator | # (config refers to values not yet known)
2026-01-05 00:02:36.647666 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-05 00:02:36.647670 | orchestrator | + checksum = (known after apply)
2026-01-05 00:02:36.647674 | orchestrator | + created_at = (known after apply)
2026-01-05 00:02:36.647678 | orchestrator | + file = (known after apply)
2026-01-05 00:02:36.647682 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.647704 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.647709 | orchestrator | + min_disk_gb = (known after apply)
2026-01-05 00:02:36.647713 | orchestrator | + min_ram_mb = (known after apply)
2026-01-05 00:02:36.647717 | orchestrator | + most_recent = true
2026-01-05 00:02:36.647721 | orchestrator | + name = (known after apply)
2026-01-05 00:02:36.647725 | orchestrator | + protected = (known after apply)
2026-01-05 00:02:36.647729 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.647736 | orchestrator | + schema = (known after apply)
2026-01-05 00:02:36.647740 | orchestrator | + size_bytes = (known after apply)
2026-01-05 00:02:36.647744 | orchestrator | + tags = (known after apply)
2026-01-05 00:02:36.647748 | orchestrator | + updated_at = (known after apply)
2026-01-05 00:02:36.647751 | orchestrator | }
2026-01-05 00:02:36.647845 | orchestrator |
2026-01-05 00:02:36.647939 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-05 00:02:36.647944 | orchestrator | # (config refers to values not yet known)
2026-01-05 00:02:36.647948 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-05 00:02:36.647952 | orchestrator | + checksum = (known after apply)
2026-01-05 00:02:36.647956 | orchestrator | + created_at = (known after apply)
2026-01-05 00:02:36.647960 | orchestrator | + file = (known after apply)
2026-01-05 00:02:36.647964 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.647968 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.647972 | orchestrator | + min_disk_gb = (known after apply)
2026-01-05 00:02:36.647976 | orchestrator | + min_ram_mb = (known after apply)
2026-01-05 00:02:36.647980 | orchestrator | + most_recent = true
2026-01-05 00:02:36.647984 | orchestrator | + name = (known after apply)
2026-01-05 00:02:36.647987 | orchestrator | + protected = (known after apply)
2026-01-05 00:02:36.647991 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.647995 | orchestrator | + schema = (known after apply)
2026-01-05 00:02:36.647999 | orchestrator | + size_bytes = (known after apply)
2026-01-05 00:02:36.648002 | orchestrator | + tags = (known after apply)
2026-01-05 00:02:36.648006 | orchestrator | + updated_at = (known after apply)
2026-01-05 00:02:36.648010 | orchestrator | }
2026-01-05 00:02:36.648099 | orchestrator |
2026-01-05 00:02:36.648112 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-05 00:02:36.648117 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-05 00:02:36.648122 | orchestrator | + content = (known after apply)
2026-01-05 00:02:36.648126 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:36.648130 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:36.648134 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:36.648137 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:36.648141 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:36.648145 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:36.648149 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:36.648154 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:36.648157 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-05 00:02:36.648161 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.648165 | orchestrator | }
2026-01-05 00:02:36.648233 | orchestrator |
2026-01-05 00:02:36.648245 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-05 00:02:36.648249 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-05 00:02:36.648253 | orchestrator | + content = (known after apply)
2026-01-05 00:02:36.648257 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:36.648261 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:36.648265 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:36.648269 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:36.648272 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:36.648283 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:36.648287 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:36.648291 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:36.648301 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-05 00:02:36.648305 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.648309 | orchestrator | }
2026-01-05 00:02:36.648375 | orchestrator |
2026-01-05 00:02:36.648386 | orchestrator | # local_file.inventory will be created
2026-01-05 00:02:36.648391 | orchestrator | + resource "local_file" "inventory" {
2026-01-05 00:02:36.648395 | orchestrator | + content = (known after apply)
2026-01-05 00:02:36.648399 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:36.648403 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:36.648407 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:36.648411 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:36.648415 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:36.648419 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:36.648423 | orchestrator | + directory_permission = "0777"
2026-01-05 00:02:36.648427 | orchestrator | + file_permission = "0644"
2026-01-05 00:02:36.648430 | orchestrator | + filename = "inventory.ci"
2026-01-05 00:02:36.648434 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.648438 | orchestrator | }
2026-01-05 00:02:36.648507 | orchestrator |
2026-01-05 00:02:36.648518 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-05 00:02:36.648523 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-05 00:02:36.648527 | orchestrator | + content = (sensitive value)
2026-01-05 00:02:36.648531 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-05 00:02:36.648535 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-05 00:02:36.648538 | orchestrator | + content_md5 = (known after apply)
2026-01-05 00:02:36.648542 | orchestrator | + content_sha1 = (known after apply)
2026-01-05 00:02:36.648546 | orchestrator | + content_sha256 = (known after apply)
2026-01-05 00:02:36.648550 | orchestrator | + content_sha512 = (known after apply)
2026-01-05 00:02:36.648554 | orchestrator | + directory_permission = "0700"
2026-01-05 00:02:36.648558 | orchestrator | + file_permission = "0600"
2026-01-05 00:02:36.648562 | orchestrator | + filename = ".id_rsa.ci"
2026-01-05 00:02:36.648566 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.648569 | orchestrator | }
2026-01-05 00:02:36.648589 | orchestrator |
2026-01-05 00:02:36.648600 | orchestrator | # null_resource.node_semaphore will be created
2026-01-05 00:02:36.648604 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-05 00:02:36.648608 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.648612 | orchestrator | }
2026-01-05 00:02:36.648673 | orchestrator |
2026-01-05 00:02:36.648684 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-05 00:02:36.648688 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-05 00:02:36.648692 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.648696 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.648700 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.648704 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:36.648707 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.648711 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-05 00:02:36.648715 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.648719 | orchestrator | + size = 80
2026-01-05 00:02:36.648723 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.648727 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.648731 | orchestrator | }
2026-01-05 00:02:36.648793 | orchestrator |
2026-01-05 00:02:36.648804 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-05 00:02:36.648809 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:36.648813 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.648816 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.648820 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.648829 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:36.648832 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.648836 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-05 00:02:36.648840 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.648844 | orchestrator | + size = 80
2026-01-05 00:02:36.648858 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.648863 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.648866 | orchestrator | }
2026-01-05 00:02:36.648927 | orchestrator |
2026-01-05 00:02:36.648939 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-05 00:02:36.648943 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:36.648947 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.648951 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.648955 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.648959 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:36.648963 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.648967 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-05 00:02:36.648970 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.648974 | orchestrator | + size = 80
2026-01-05 00:02:36.648978 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.648982 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.648986 | orchestrator | }
2026-01-05 00:02:36.649041 | orchestrator |
2026-01-05 00:02:36.649052 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-05 00:02:36.649056 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:36.649060 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.649064 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.649068 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.649071 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:36.649075 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.649079 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-05 00:02:36.649083 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.649087 | orchestrator | + size = 80
2026-01-05 00:02:36.649093 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.649097 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.649101 | orchestrator | }
2026-01-05 00:02:36.649162 | orchestrator |
2026-01-05 00:02:36.649174 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-05 00:02:36.649178 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:36.649182 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.649186 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.649190 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.649193 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:36.649197 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.649201 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-05 00:02:36.649205 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.649209 | orchestrator | + size = 80
2026-01-05 00:02:36.649213 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.649217 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.649220 | orchestrator | }
2026-01-05 00:02:36.649275 | orchestrator |
2026-01-05 00:02:36.649286 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-05 00:02:36.649290 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:36.649294 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.649298 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.649302 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.649311 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:36.649315 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.649319 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-05 00:02:36.649323 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.649326 | orchestrator | + size = 80
2026-01-05 00:02:36.649330 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.649334 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.649338 | orchestrator | }
2026-01-05 00:02:36.649394 | orchestrator |
2026-01-05 00:02:36.649405 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-05 00:02:36.649410 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-05 00:02:36.649413 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.649417 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.649421 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.649425 | orchestrator | + image_id = (known after apply)
2026-01-05 00:02:36.649429 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.649433 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-05 00:02:36.649436 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.649440 | orchestrator | + size = 80
2026-01-05 00:02:36.649444 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.649448 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.649452 | orchestrator | }
2026-01-05 00:02:36.649508 | orchestrator |
2026-01-05 00:02:36.649519 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-05 00:02:36.649524 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:36.649528 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.649532 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.649536 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.649539 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.649543 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-05 00:02:36.649547 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.649551 | orchestrator | + size = 20
2026-01-05 00:02:36.649555 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.649559 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.649563 | orchestrator | }
2026-01-05 00:02:36.649617 | orchestrator |
2026-01-05 00:02:36.649628 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-05 00:02:36.649632 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:36.649636 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.649640 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.649644 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.649648 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.649652 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-05 00:02:36.649655 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.649659 | orchestrator | + size = 20
2026-01-05 00:02:36.649663 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.649667 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.649671 | orchestrator | }
2026-01-05 00:02:36.649724 | orchestrator |
2026-01-05 00:02:36.649735 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-05 00:02:36.649739 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:36.649743 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.649747 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.649751 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.649755 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.649759 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-05 00:02:36.649763 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.649771 | orchestrator | + size = 20
2026-01-05 00:02:36.649775 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.649779 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.649783 | orchestrator | }
2026-01-05 00:02:36.649835 | orchestrator |
2026-01-05 00:02:36.649846 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-05 00:02:36.649876 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:36.649880 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.649884 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.649888 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.649895 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.649899 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-05 00:02:36.649903 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.649906 | orchestrator | + size = 20
2026-01-05 00:02:36.649910 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.649914 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.649918 | orchestrator | }
2026-01-05 00:02:36.649975 | orchestrator |
2026-01-05 00:02:36.649986 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-05 00:02:36.649991 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:36.649995 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.649998 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.650002 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.650006 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.650010 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-05 00:02:36.650037 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.650041 | orchestrator | + size = 20
2026-01-05 00:02:36.650045 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.650048 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.650052 | orchestrator | }
2026-01-05 00:02:36.650113 | orchestrator |
2026-01-05 00:02:36.650124 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-05 00:02:36.650129 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:36.650133 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.650136 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.650140 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.650144 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.650148 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-05 00:02:36.650152 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.650155 | orchestrator | + size = 20
2026-01-05 00:02:36.650159 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.650163 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.650167 | orchestrator | }
2026-01-05 00:02:36.650220 | orchestrator |
2026-01-05 00:02:36.650232 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-05 00:02:36.650236 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:36.650240 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.650244 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.650248 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.650252 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.650255 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-05 00:02:36.650259 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.650263 | orchestrator | + size = 20
2026-01-05 00:02:36.650267 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.650271 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.650274 | orchestrator | }
2026-01-05 00:02:36.650334 | orchestrator |
2026-01-05 00:02:36.650345 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-05 00:02:36.650350 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-05 00:02:36.650359 | orchestrator | + attachment = (known after apply)
2026-01-05 00:02:36.650362 | orchestrator | + availability_zone = "nova"
2026-01-05 00:02:36.650366 | orchestrator | + id = (known after apply)
2026-01-05 00:02:36.650370 | orchestrator | + metadata = (known after apply)
2026-01-05 00:02:36.650374 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-05 00:02:36.650378 | orchestrator | + region = (known after apply)
2026-01-05 00:02:36.650382 | orchestrator | + size = 20
2026-01-05 00:02:36.650386 | orchestrator | + volume_retype_policy = "never"
2026-01-05 00:02:36.650390 | orchestrator | + volume_type = "ssd"
2026-01-05 00:02:36.650394 | orchestrator | }
2026-01-05 00:02:36.650448 | orchestrator |
2026-01-05 00:02:36.650459 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-05 00:02:36.650464 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-05 00:02:36.650467 | orchestrator | + attachment = (known after apply) 2026-01-05 00:02:36.650471 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:36.650475 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.650479 | orchestrator | + metadata = (known after apply) 2026-01-05 00:02:36.650483 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-05 00:02:36.650487 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.650491 | orchestrator | + size = 20 2026-01-05 00:02:36.650494 | orchestrator | + volume_retype_policy = "never" 2026-01-05 00:02:36.650498 | orchestrator | + volume_type = "ssd" 2026-01-05 00:02:36.650502 | orchestrator | } 2026-01-05 00:02:36.650683 | orchestrator | 2026-01-05 00:02:36.650695 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-05 00:02:36.650699 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-05 00:02:36.650703 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:36.650707 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:36.650711 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:36.650715 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:36.650718 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:36.650722 | orchestrator | + config_drive = true 2026-01-05 00:02:36.650732 | orchestrator | + created = (known after apply) 2026-01-05 00:02:36.650736 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:36.650740 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-05 00:02:36.650744 | orchestrator | + force_delete = false 2026-01-05 00:02:36.650747 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:36.650751 | 
orchestrator | + id = (known after apply) 2026-01-05 00:02:36.650755 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:36.650759 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:36.650763 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:36.650766 | orchestrator | + name = "testbed-manager" 2026-01-05 00:02:36.650770 | orchestrator | + power_state = "active" 2026-01-05 00:02:36.650774 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.650778 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:36.650782 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:36.650786 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:36.650789 | orchestrator | + user_data = (sensitive value) 2026-01-05 00:02:36.650793 | orchestrator | 2026-01-05 00:02:36.650797 | orchestrator | + block_device { 2026-01-05 00:02:36.650801 | orchestrator | + boot_index = 0 2026-01-05 00:02:36.650805 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:36.650809 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:36.650812 | orchestrator | + multiattach = false 2026-01-05 00:02:36.650816 | orchestrator | + source_type = "volume" 2026-01-05 00:02:36.650820 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.650828 | orchestrator | } 2026-01-05 00:02:36.650832 | orchestrator | 2026-01-05 00:02:36.650836 | orchestrator | + network { 2026-01-05 00:02:36.650840 | orchestrator | + access_network = false 2026-01-05 00:02:36.650843 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:36.650847 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:36.650864 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:36.650868 | orchestrator | + name = (known after apply) 2026-01-05 00:02:36.650872 | orchestrator | + port = (known after apply) 2026-01-05 00:02:36.650875 | orchestrator | + uuid = (known after apply) 2026-01-05 
00:02:36.650879 | orchestrator | } 2026-01-05 00:02:36.650883 | orchestrator | } 2026-01-05 00:02:36.651063 | orchestrator | 2026-01-05 00:02:36.651075 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-05 00:02:36.651080 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:36.651084 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:36.651088 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:36.651091 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:36.651095 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:36.651099 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:36.651103 | orchestrator | + config_drive = true 2026-01-05 00:02:36.651107 | orchestrator | + created = (known after apply) 2026-01-05 00:02:36.651110 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:36.651114 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:36.651118 | orchestrator | + force_delete = false 2026-01-05 00:02:36.651122 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:36.651126 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.651130 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:36.651134 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:36.651137 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:36.651141 | orchestrator | + name = "testbed-node-0" 2026-01-05 00:02:36.651145 | orchestrator | + power_state = "active" 2026-01-05 00:02:36.651149 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.651153 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:36.651156 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:36.651160 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:36.651164 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:36.651168 | orchestrator | 2026-01-05 00:02:36.651172 | orchestrator | + block_device { 2026-01-05 00:02:36.651176 | orchestrator | + boot_index = 0 2026-01-05 00:02:36.651179 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:36.651183 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:36.651187 | orchestrator | + multiattach = false 2026-01-05 00:02:36.651191 | orchestrator | + source_type = "volume" 2026-01-05 00:02:36.651195 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.651199 | orchestrator | } 2026-01-05 00:02:36.651202 | orchestrator | 2026-01-05 00:02:36.651206 | orchestrator | + network { 2026-01-05 00:02:36.651210 | orchestrator | + access_network = false 2026-01-05 00:02:36.651214 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:36.651218 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:36.651221 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:36.651225 | orchestrator | + name = (known after apply) 2026-01-05 00:02:36.651229 | orchestrator | + port = (known after apply) 2026-01-05 00:02:36.651233 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.651237 | orchestrator | } 2026-01-05 00:02:36.651240 | orchestrator | } 2026-01-05 00:02:36.651409 | orchestrator | 2026-01-05 00:02:36.651421 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-05 00:02:36.651426 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:36.651430 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:36.651437 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:36.651441 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:36.651445 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:36.651449 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:36.651452 
| orchestrator | + config_drive = true 2026-01-05 00:02:36.651456 | orchestrator | + created = (known after apply) 2026-01-05 00:02:36.651460 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:36.651464 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:36.651468 | orchestrator | + force_delete = false 2026-01-05 00:02:36.651471 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:36.651475 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.651479 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:36.651483 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:36.651486 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:36.651490 | orchestrator | + name = "testbed-node-1" 2026-01-05 00:02:36.651494 | orchestrator | + power_state = "active" 2026-01-05 00:02:36.651498 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.651502 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:36.651506 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:36.651509 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:36.651516 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:36.651520 | orchestrator | 2026-01-05 00:02:36.651524 | orchestrator | + block_device { 2026-01-05 00:02:36.651527 | orchestrator | + boot_index = 0 2026-01-05 00:02:36.651531 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:36.651535 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:36.651539 | orchestrator | + multiattach = false 2026-01-05 00:02:36.651543 | orchestrator | + source_type = "volume" 2026-01-05 00:02:36.651546 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.651550 | orchestrator | } 2026-01-05 00:02:36.651554 | orchestrator | 2026-01-05 00:02:36.651558 | orchestrator | + network { 2026-01-05 00:02:36.651562 | orchestrator | + access_network = 
false 2026-01-05 00:02:36.651565 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:36.651569 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:36.651573 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:36.651577 | orchestrator | + name = (known after apply) 2026-01-05 00:02:36.651581 | orchestrator | + port = (known after apply) 2026-01-05 00:02:36.651584 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.651588 | orchestrator | } 2026-01-05 00:02:36.651592 | orchestrator | } 2026-01-05 00:02:36.651755 | orchestrator | 2026-01-05 00:02:36.651767 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-05 00:02:36.651771 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:36.651775 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:36.651779 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:36.651783 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:36.651787 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:36.651791 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:36.651795 | orchestrator | + config_drive = true 2026-01-05 00:02:36.651799 | orchestrator | + created = (known after apply) 2026-01-05 00:02:36.651803 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:36.651806 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:36.651810 | orchestrator | + force_delete = false 2026-01-05 00:02:36.651814 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:36.651818 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.651822 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:36.651830 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:36.651834 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:36.651837 | orchestrator | + name = 
"testbed-node-2" 2026-01-05 00:02:36.651841 | orchestrator | + power_state = "active" 2026-01-05 00:02:36.651845 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.651860 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:36.651864 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:36.651868 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:36.651872 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:36.651876 | orchestrator | 2026-01-05 00:02:36.651879 | orchestrator | + block_device { 2026-01-05 00:02:36.651883 | orchestrator | + boot_index = 0 2026-01-05 00:02:36.651887 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:36.651891 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:36.651894 | orchestrator | + multiattach = false 2026-01-05 00:02:36.651898 | orchestrator | + source_type = "volume" 2026-01-05 00:02:36.651902 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.651906 | orchestrator | } 2026-01-05 00:02:36.651910 | orchestrator | 2026-01-05 00:02:36.651913 | orchestrator | + network { 2026-01-05 00:02:36.651917 | orchestrator | + access_network = false 2026-01-05 00:02:36.651921 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:36.651925 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:36.651928 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:36.651932 | orchestrator | + name = (known after apply) 2026-01-05 00:02:36.651936 | orchestrator | + port = (known after apply) 2026-01-05 00:02:36.651940 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.651943 | orchestrator | } 2026-01-05 00:02:36.651947 | orchestrator | } 2026-01-05 00:02:36.652123 | orchestrator | 2026-01-05 00:02:36.652138 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-05 00:02:36.652142 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:36.652146 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:36.652150 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:36.652154 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:36.652158 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:36.652162 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:36.652165 | orchestrator | + config_drive = true 2026-01-05 00:02:36.652169 | orchestrator | + created = (known after apply) 2026-01-05 00:02:36.652173 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:36.652177 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:36.652181 | orchestrator | + force_delete = false 2026-01-05 00:02:36.652184 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:36.652188 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.652192 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:36.652196 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:36.652200 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:36.652204 | orchestrator | + name = "testbed-node-3" 2026-01-05 00:02:36.652207 | orchestrator | + power_state = "active" 2026-01-05 00:02:36.652211 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.652215 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:36.652219 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:36.652222 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:36.652226 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:36.652230 | orchestrator | 2026-01-05 00:02:36.652234 | orchestrator | + block_device { 2026-01-05 00:02:36.652238 | orchestrator | + boot_index = 0 2026-01-05 00:02:36.652242 | orchestrator | + delete_on_termination = false 2026-01-05 
00:02:36.652245 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:36.652253 | orchestrator | + multiattach = false 2026-01-05 00:02:36.652257 | orchestrator | + source_type = "volume" 2026-01-05 00:02:36.652260 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.652264 | orchestrator | } 2026-01-05 00:02:36.652268 | orchestrator | 2026-01-05 00:02:36.652272 | orchestrator | + network { 2026-01-05 00:02:36.652276 | orchestrator | + access_network = false 2026-01-05 00:02:36.652280 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:36.652283 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:36.652287 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:36.652291 | orchestrator | + name = (known after apply) 2026-01-05 00:02:36.652295 | orchestrator | + port = (known after apply) 2026-01-05 00:02:36.652299 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.652303 | orchestrator | } 2026-01-05 00:02:36.652306 | orchestrator | } 2026-01-05 00:02:36.652484 | orchestrator | 2026-01-05 00:02:36.652496 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-05 00:02:36.652501 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:36.652505 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:36.652508 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:36.652512 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:36.652516 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:36.652520 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:36.652524 | orchestrator | + config_drive = true 2026-01-05 00:02:36.652527 | orchestrator | + created = (known after apply) 2026-01-05 00:02:36.652531 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:36.652535 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:36.652538 | 
orchestrator | + force_delete = false 2026-01-05 00:02:36.652542 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:36.652546 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.652550 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:36.652553 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:36.652557 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:36.652561 | orchestrator | + name = "testbed-node-4" 2026-01-05 00:02:36.652565 | orchestrator | + power_state = "active" 2026-01-05 00:02:36.652568 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.652572 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:36.652576 | orchestrator | + stop_before_destroy = false 2026-01-05 00:02:36.652579 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:36.652583 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:36.652587 | orchestrator | 2026-01-05 00:02:36.652591 | orchestrator | + block_device { 2026-01-05 00:02:36.652595 | orchestrator | + boot_index = 0 2026-01-05 00:02:36.652598 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:36.652602 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:36.652606 | orchestrator | + multiattach = false 2026-01-05 00:02:36.652610 | orchestrator | + source_type = "volume" 2026-01-05 00:02:36.652613 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.652617 | orchestrator | } 2026-01-05 00:02:36.652621 | orchestrator | 2026-01-05 00:02:36.652625 | orchestrator | + network { 2026-01-05 00:02:36.652628 | orchestrator | + access_network = false 2026-01-05 00:02:36.652632 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:36.652636 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:36.652640 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:36.652643 | orchestrator | + name = (known 
after apply) 2026-01-05 00:02:36.652647 | orchestrator | + port = (known after apply) 2026-01-05 00:02:36.652651 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.652655 | orchestrator | } 2026-01-05 00:02:36.652658 | orchestrator | } 2026-01-05 00:02:36.652837 | orchestrator | 2026-01-05 00:02:36.652874 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-05 00:02:36.652879 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-05 00:02:36.652883 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-05 00:02:36.652887 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-05 00:02:36.652891 | orchestrator | + all_metadata = (known after apply) 2026-01-05 00:02:36.652895 | orchestrator | + all_tags = (known after apply) 2026-01-05 00:02:36.652898 | orchestrator | + availability_zone = "nova" 2026-01-05 00:02:36.652902 | orchestrator | + config_drive = true 2026-01-05 00:02:36.652906 | orchestrator | + created = (known after apply) 2026-01-05 00:02:36.652910 | orchestrator | + flavor_id = (known after apply) 2026-01-05 00:02:36.652913 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-05 00:02:36.652917 | orchestrator | + force_delete = false 2026-01-05 00:02:36.652921 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-05 00:02:36.652925 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.652928 | orchestrator | + image_id = (known after apply) 2026-01-05 00:02:36.652932 | orchestrator | + image_name = (known after apply) 2026-01-05 00:02:36.652936 | orchestrator | + key_pair = "testbed" 2026-01-05 00:02:36.652939 | orchestrator | + name = "testbed-node-5" 2026-01-05 00:02:36.652943 | orchestrator | + power_state = "active" 2026-01-05 00:02:36.652947 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.652951 | orchestrator | + security_groups = (known after apply) 2026-01-05 00:02:36.652954 | orchestrator | + 
stop_before_destroy = false 2026-01-05 00:02:36.652958 | orchestrator | + updated = (known after apply) 2026-01-05 00:02:36.652962 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-05 00:02:36.652966 | orchestrator | 2026-01-05 00:02:36.652970 | orchestrator | + block_device { 2026-01-05 00:02:36.652973 | orchestrator | + boot_index = 0 2026-01-05 00:02:36.652977 | orchestrator | + delete_on_termination = false 2026-01-05 00:02:36.652981 | orchestrator | + destination_type = "volume" 2026-01-05 00:02:36.652985 | orchestrator | + multiattach = false 2026-01-05 00:02:36.652988 | orchestrator | + source_type = "volume" 2026-01-05 00:02:36.652992 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.652996 | orchestrator | } 2026-01-05 00:02:36.653000 | orchestrator | 2026-01-05 00:02:36.653003 | orchestrator | + network { 2026-01-05 00:02:36.653007 | orchestrator | + access_network = false 2026-01-05 00:02:36.653011 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-05 00:02:36.653015 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-05 00:02:36.653018 | orchestrator | + mac = (known after apply) 2026-01-05 00:02:36.653022 | orchestrator | + name = (known after apply) 2026-01-05 00:02:36.653026 | orchestrator | + port = (known after apply) 2026-01-05 00:02:36.653030 | orchestrator | + uuid = (known after apply) 2026-01-05 00:02:36.653033 | orchestrator | } 2026-01-05 00:02:36.653037 | orchestrator | } 2026-01-05 00:02:36.653085 | orchestrator | 2026-01-05 00:02:36.653097 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-05 00:02:36.653101 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-05 00:02:36.653105 | orchestrator | + fingerprint = (known after apply) 2026-01-05 00:02:36.653109 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.653112 | orchestrator | + name = "testbed" 2026-01-05 00:02:36.653116 | orchestrator | + private_key = 
(sensitive value) 2026-01-05 00:02:36.653120 | orchestrator | + public_key = (known after apply) 2026-01-05 00:02:36.653123 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.653127 | orchestrator | + user_id = (known after apply) 2026-01-05 00:02:36.653131 | orchestrator | } 2026-01-05 00:02:36.653167 | orchestrator | 2026-01-05 00:02:36.653178 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-05 00:02:36.653183 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-05 00:02:36.653191 | orchestrator | + device = (known after apply) 2026-01-05 00:02:36.653195 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.653198 | orchestrator | + instance_id = (known after apply) 2026-01-05 00:02:36.653202 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.653212 | orchestrator | + volume_id = (known after apply) 2026-01-05 00:02:36.653216 | orchestrator | } 2026-01-05 00:02:36.653252 | orchestrator | 2026-01-05 00:02:36.653264 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-05 00:02:36.653268 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-05 00:02:36.653272 | orchestrator | + device = (known after apply) 2026-01-05 00:02:36.653276 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.653279 | orchestrator | + instance_id = (known after apply) 2026-01-05 00:02:36.653283 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.653287 | orchestrator | + volume_id = (known after apply) 2026-01-05 00:02:36.653291 | orchestrator | } 2026-01-05 00:02:36.653327 | orchestrator | 2026-01-05 00:02:36.653338 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-05 00:02:36.653343 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-05 00:02:36.658287 | orchestrator | + network_id = (known after apply) 2026-01-05 00:02:36.658291 | orchestrator | + no_gateway = false 2026-01-05 00:02:36.658294 | orchestrator | + region = (known after apply) 2026-01-05 00:02:36.658298 | orchestrator | + service_types = (known after apply) 2026-01-05 00:02:36.658307 | orchestrator | + tenant_id = (known after apply) 2026-01-05 00:02:36.658310 | orchestrator | 2026-01-05 00:02:36.658314 | orchestrator | + allocation_pool { 2026-01-05 00:02:36.658318 | orchestrator | + end = "192.168.31.250" 2026-01-05 00:02:36.658322 | orchestrator | + start = "192.168.31.200" 2026-01-05 00:02:36.658326 | orchestrator | } 2026-01-05 00:02:36.658329 | orchestrator | } 2026-01-05 00:02:36.658367 | orchestrator | 2026-01-05 00:02:36.658380 | orchestrator | # terraform_data.image will be created 2026-01-05 00:02:36.658384 | orchestrator | + resource "terraform_data" "image" { 2026-01-05 00:02:36.658388 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.658392 | orchestrator | + input = "Ubuntu 24.04" 2026-01-05 00:02:36.658396 | orchestrator | + output = (known after apply) 2026-01-05 00:02:36.658400 | orchestrator | } 2026-01-05 00:02:36.658456 | orchestrator | 2026-01-05 00:02:36.658470 | orchestrator | # terraform_data.image_node will be created 2026-01-05 00:02:36.658475 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-05 00:02:36.658479 | orchestrator | + id = (known after apply) 2026-01-05 00:02:36.658482 | orchestrator | + input = "Ubuntu 24.04" 2026-01-05 00:02:36.658495 | orchestrator | + output = (known after apply) 2026-01-05 00:02:36.658499 | orchestrator | } 2026-01-05 00:02:36.658520 | orchestrator | 2026-01-05 00:02:36.658525 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-01-05 00:02:36.658537 | orchestrator | 2026-01-05 00:02:36.658544 | orchestrator | Changes to Outputs: 2026-01-05 00:02:36.658555 | orchestrator | + manager_address = (sensitive value) 2026-01-05 00:02:36.658562 | orchestrator | + private_key = (sensitive value) 2026-01-05 00:02:37.089602 | orchestrator | terraform_data.image: Creating... 2026-01-05 00:02:37.091714 | orchestrator | terraform_data.image: Creation complete after 0s [id=1182321f-2eef-63aa-40dd-1627bd05a344] 2026-01-05 00:02:37.091766 | orchestrator | terraform_data.image_node: Creating... 2026-01-05 00:02:37.091774 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=6aed8770-6dad-054b-1e95-cd9397097b3b] 2026-01-05 00:02:37.119016 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-05 00:02:37.128914 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-05 00:02:37.129263 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-05 00:02:37.130633 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-05 00:02:37.135622 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-01-05 00:02:37.135791 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-05 00:02:37.135988 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-01-05 00:02:37.147558 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-01-05 00:02:37.149844 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-05 00:02:37.155288 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
2026-01-05 00:02:37.589047 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-05 00:02:37.597112 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-05 00:02:37.598935 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-05 00:02:37.603096 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-01-05 00:02:37.770097 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-01-05 00:02:37.779182 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-01-05 00:02:38.191469 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=3982651d-7412-4a1c-b434-0f6e386ce3f1] 2026-01-05 00:02:39.047787 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-01-05 00:02:40.767983 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=09f09123-b92e-4af4-8119-7d25e215193b] 2026-01-05 00:02:40.778193 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-01-05 00:02:40.778283 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522] 2026-01-05 00:02:40.783929 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=ca851d29-aa00-48c4-a2d0-a646814f4a41] 2026-01-05 00:02:40.788292 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-01-05 00:02:40.789938 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2026-01-05 00:02:40.798933 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=bcde85c0-b124-4268-b34b-cc4a07cfe72d] 2026-01-05 00:02:40.803511 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-01-05 00:02:40.809476 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=1d3cc069-e4cd-473c-8ec3-e2e615e111a0] 2026-01-05 00:02:40.815305 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-01-05 00:02:40.841302 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=9f2df327-5b12-4442-ac27-592210953f70] 2026-01-05 00:02:40.854534 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-01-05 00:02:40.894930 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=99050707-7ba3-43f8-b640-7ac26fbd844b] 2026-01-05 00:02:40.906121 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=6e0b145f-2bfd-4824-bc37-4d4082c6f3f3] 2026-01-05 00:02:40.911535 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-01-05 00:02:40.916941 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=d7bc0b5556b80edcc603fd3f040423f651e5165e] 2026-01-05 00:02:40.922170 | orchestrator | local_file.id_rsa_pub: Creating... 2026-01-05 00:02:40.924825 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-01-05 00:02:40.927170 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=7d6e9fd13f798b5c42d533593cb61b0a2492e935] 2026-01-05 00:02:40.999657 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=6f88ade1-67f9-419a-b69f-9c70a1e62aa2] 2026-01-05 00:02:41.610658 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=8a26d9b4-73c3-4476-a81e-964fa64c46bb] 2026-01-05 00:02:42.007794 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=a87ce2bc-3a32-42ca-863f-972d86c57ddc] 2026-01-05 00:02:42.015127 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-01-05 00:02:44.222679 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=d306408f-4477-48ac-bb84-69da99054fdf] 2026-01-05 00:02:44.240816 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b] 2026-01-05 00:02:44.504458 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=3250b0f8-cf47-4b18-9931-22a1ebe34c49] 2026-01-05 00:02:44.504536 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=0b0d1c85-8aad-4201-aadd-214ecf9ccf0b] 2026-01-05 00:02:44.504550 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=7efae86a-8b4f-4401-bee7-529c12412766] 2026-01-05 00:02:44.504562 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=78340717-3c88-4c84-83f5-f931035dba88] 2026-01-05 00:02:45.746276 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=4c8153af-5794-43df-a727-580490ddf237] 2026-01-05 00:02:45.760145 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 
2026-01-05 00:02:45.760469 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-01-05 00:02:45.762132 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-01-05 00:02:45.960442 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=dfde18ed-200c-4758-9b5c-0cf11837585a] 2026-01-05 00:02:45.973544 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-01-05 00:02:45.974356 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-01-05 00:02:45.975639 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-01-05 00:02:45.976569 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-01-05 00:02:45.980338 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-01-05 00:02:45.985011 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-01-05 00:02:45.985065 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-01-05 00:02:45.985080 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-01-05 00:02:46.068151 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=e6170edf-9e56-429c-8082-b1e8b6f8a237] 2026-01-05 00:02:46.080024 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-01-05 00:02:46.176016 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=20e04d15-f506-47c4-9386-db60a0b65304] 2026-01-05 00:02:46.187920 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 
2026-01-05 00:02:46.386491 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=b202e3d8-91a5-451e-b48a-b6089483e2bc] 2026-01-05 00:02:46.392968 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-01-05 00:02:46.575824 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=cd914fe0-5556-4dd1-a08f-b67891728363] 2026-01-05 00:02:46.584693 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-01-05 00:02:46.664212 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=9696381f-4efe-48e2-8e15-34c7e5ba763f] 2026-01-05 00:02:46.668085 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=9d818deb-906d-4bfe-8a59-9302ef8b7e5e] 2026-01-05 00:02:46.673660 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-01-05 00:02:46.673789 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-01-05 00:02:46.684663 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=a1ba6642-324c-44d6-9fc3-ce7e68d34321] 2026-01-05 00:02:46.689172 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-01-05 00:02:46.878129 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=6e2aeb4b-e5db-41a7-8b42-b41478428ccf] 2026-01-05 00:02:46.884022 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
2026-01-05 00:02:46.887920 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=3045e1f0-8b4f-40d0-a3f0-13651ee6a259] 2026-01-05 00:02:46.934755 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=c69828a2-e120-4edf-8a04-86f26605ade1] 2026-01-05 00:02:46.935997 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=9020f612-3ec1-4b0d-b904-00f85a154b40] 2026-01-05 00:02:47.018267 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=e4322b01-b6c6-48d5-b8f1-c8e654b0a323] 2026-01-05 00:02:47.101499 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=e46a3318-4bb3-4be9-9553-aa08330510d2] 2026-01-05 00:02:47.154302 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=2bd3c1f4-21bf-4152-87a9-a808ebab8537] 2026-01-05 00:02:47.514279 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=e58df2ee-630a-41ab-8fa3-2f2ec8af018d] 2026-01-05 00:02:47.809407 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=b77ab6b4-db5f-429b-ace0-bd3477787003] 2026-01-05 00:02:47.932376 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=b01c9de4-c15f-4022-b400-188e51775351] 2026-01-05 00:02:48.897688 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=163aa3af-2332-4938-b695-94f9517870ac] 2026-01-05 00:02:48.930231 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-01-05 00:02:48.945388 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 
2026-01-05 00:02:48.946409 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-01-05 00:02:48.947248 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-01-05 00:02:48.952981 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-01-05 00:02:48.958203 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-01-05 00:02:48.965821 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-01-05 00:02:51.167838 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=0313d02a-f069-4408-915c-f2290899aba3] 2026-01-05 00:02:51.183448 | orchestrator | local_file.inventory: Creating... 2026-01-05 00:02:51.184159 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-01-05 00:02:51.187004 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-01-05 00:02:51.187621 | orchestrator | local_file.inventory: Creation complete after 0s [id=686fa4310e8a7fe74793d11453581c9be5778fe3] 2026-01-05 00:02:51.192344 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=3f6cdf1ef30c5f8008aacf3404e00b0ecb3a37aa] 2026-01-05 00:02:52.678394 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=0313d02a-f069-4408-915c-f2290899aba3] 2026-01-05 00:02:58.953178 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-01-05 00:02:58.953314 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-01-05 00:02:58.953351 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-01-05 00:02:58.954386 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2026-01-05 00:02:58.960945 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-01-05 00:02:58.972287 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-01-05 00:03:08.962192 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-01-05 00:03:08.962310 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-01-05 00:03:08.962326 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-01-05 00:03:08.962339 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-01-05 00:03:08.962351 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-01-05 00:03:08.972664 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-01-05 00:03:18.962455 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-01-05 00:03:18.962597 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-01-05 00:03:18.963315 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-01-05 00:03:18.963344 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-01-05 00:03:18.963353 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-01-05 00:03:18.973867 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-01-05 00:03:19.904381 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=8bfa6e41-4dd6-473f-a6ec-b6775fae6b83] 2026-01-05 00:03:28.962750 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[40s elapsed] 2026-01-05 00:03:28.962913 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-01-05 00:03:28.962925 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-01-05 00:03:28.962932 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-01-05 00:03:28.963037 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-01-05 00:03:30.010874 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=8f6bacc7-4620-4dc7-b3af-aa18a2cac6c2] 2026-01-05 00:03:30.029511 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=8fd011d3-e47b-474b-b961-3de76f9ff738] 2026-01-05 00:03:30.145300 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=1438ab1a-88ff-4918-8246-167f3c15c4a1] 2026-01-05 00:03:30.203140 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=cc940f82-455f-4b28-81c8-17f403c66c63] 2026-01-05 00:03:30.861307 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 42s [id=1c6a312f-3f9b-4f7d-84e4-546f642bdd3d] 2026-01-05 00:03:30.886092 | orchestrator | null_resource.node_semaphore: Creating... 2026-01-05 00:03:30.890170 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3740660547999754401] 2026-01-05 00:03:30.890676 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-01-05 00:03:30.895221 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-01-05 00:03:30.896779 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-01-05 00:03:30.899711 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
2026-01-05 00:03:30.909291 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-01-05 00:03:30.922270 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-01-05 00:03:30.923150 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-01-05 00:03:30.926348 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-01-05 00:03:30.935102 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-01-05 00:03:30.953304 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-01-05 00:03:34.296018 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=8bfa6e41-4dd6-473f-a6ec-b6775fae6b83/6f88ade1-67f9-419a-b69f-9c70a1e62aa2] 2026-01-05 00:03:34.305607 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=8f6bacc7-4620-4dc7-b3af-aa18a2cac6c2/6e0b145f-2bfd-4824-bc37-4d4082c6f3f3] 2026-01-05 00:03:34.334538 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=8fd011d3-e47b-474b-b961-3de76f9ff738/ca851d29-aa00-48c4-a2d0-a646814f4a41] 2026-01-05 00:03:40.397658 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=8bfa6e41-4dd6-473f-a6ec-b6775fae6b83/1d3cc069-e4cd-473c-8ec3-e2e615e111a0] 2026-01-05 00:03:40.428179 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=8fd011d3-e47b-474b-b961-3de76f9ff738/99050707-7ba3-43f8-b640-7ac26fbd844b] 2026-01-05 00:03:40.428994 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=8f6bacc7-4620-4dc7-b3af-aa18a2cac6c2/ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522] 2026-01-05 00:03:40.451833 | 
orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=8bfa6e41-4dd6-473f-a6ec-b6775fae6b83/09f09123-b92e-4af4-8119-7d25e215193b] 2026-01-05 00:03:40.453883 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=8f6bacc7-4620-4dc7-b3af-aa18a2cac6c2/9f2df327-5b12-4442-ac27-592210953f70] 2026-01-05 00:03:40.487719 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=8fd011d3-e47b-474b-b961-3de76f9ff738/bcde85c0-b124-4268-b34b-cc4a07cfe72d] 2026-01-05 00:03:40.960998 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-01-05 00:03:50.961954 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-01-05 00:03:51.319844 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=f0bf74c0-3c9f-4c2d-87d6-a660fce5856f] 2026-01-05 00:03:54.632636 | orchestrator | 2026-01-05 00:03:54.632734 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
2026-01-05 00:03:54.632743 | orchestrator | 2026-01-05 00:03:54.632750 | orchestrator | Outputs: 2026-01-05 00:03:54.632757 | orchestrator | 2026-01-05 00:03:54.632763 | orchestrator | manager_address = 2026-01-05 00:03:54.632771 | orchestrator | private_key = 2026-01-05 00:03:55.144132 | orchestrator | ok: Runtime: 0:01:26.585187 2026-01-05 00:03:55.171833 | 2026-01-05 00:03:55.171983 | TASK [Fetch manager address] 2026-01-05 00:03:55.661422 | orchestrator | ok 2026-01-05 00:03:55.679788 | 2026-01-05 00:03:55.680307 | TASK [Set manager_host address] 2026-01-05 00:03:55.777642 | orchestrator | ok 2026-01-05 00:03:55.793827 | 2026-01-05 00:03:55.794118 | LOOP [Update ansible collections] 2026-01-05 00:03:59.683057 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-05 00:03:59.683692 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:03:59.683781 | orchestrator | Starting galaxy collection install process 2026-01-05 00:03:59.683829 | orchestrator | Process install dependency map 2026-01-05 00:03:59.683871 | orchestrator | Starting collection install process 2026-01-05 00:03:59.683898 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2026-01-05 00:03:59.683930 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2026-01-05 00:03:59.683964 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-01-05 00:03:59.684032 | orchestrator | ok: Item: commons Runtime: 0:00:03.528729 2026-01-05 00:04:00.764195 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:04:00.764467 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-05 00:04:00.764543 | orchestrator | Starting galaxy collection 
install process 2026-01-05 00:04:00.764593 | orchestrator | Process install dependency map 2026-01-05 00:04:00.764639 | orchestrator | Starting collection install process 2026-01-05 00:04:00.764681 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2026-01-05 00:04:00.764724 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2026-01-05 00:04:00.764759 | orchestrator | osism.services:999.0.0 was installed successfully 2026-01-05 00:04:00.764814 | orchestrator | ok: Item: services Runtime: 0:00:00.783844 2026-01-05 00:04:00.797410 | 2026-01-05 00:04:00.797598 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-05 00:04:11.446012 | orchestrator | ok 2026-01-05 00:04:11.459111 | 2026-01-05 00:04:11.459420 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-05 00:05:11.504895 | orchestrator | ok 2026-01-05 00:05:11.513576 | 2026-01-05 00:05:11.513710 | TASK [Fetch manager ssh hostkey] 2026-01-05 00:05:13.095895 | orchestrator | Output suppressed because no_log was given 2026-01-05 00:05:13.107647 | 2026-01-05 00:05:13.107799 | TASK [Get ssh keypair from terraform environment] 2026-01-05 00:05:13.646114 | orchestrator | ok: Runtime: 0:00:00.010896 2026-01-05 00:05:13.654502 | 2026-01-05 00:05:13.654628 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-05 00:05:13.709465 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-01-05 00:05:13.728127 | 2026-01-05 00:05:13.728394 | TASK [Run manager part 0] 2026-01-05 00:05:14.879248 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:05:14.941942 | orchestrator | 2026-01-05 00:05:14.942046 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-05 00:05:14.942061 | orchestrator | 2026-01-05 00:05:14.942081 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-05 00:05:16.858813 | orchestrator | ok: [testbed-manager] 2026-01-05 00:05:16.858905 | orchestrator | 2026-01-05 00:05:16.858947 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-05 00:05:16.858965 | orchestrator | 2026-01-05 00:05:16.858983 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:05:18.948091 | orchestrator | ok: [testbed-manager] 2026-01-05 00:05:18.948158 | orchestrator | 2026-01-05 00:05:18.948165 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-05 00:05:19.692213 | orchestrator | ok: [testbed-manager] 2026-01-05 00:05:19.692264 | orchestrator | 2026-01-05 00:05:19.692273 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-05 00:05:19.732721 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:19.732791 | orchestrator | 2026-01-05 00:05:19.732805 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-05 00:05:19.758313 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:19.758374 | orchestrator | 2026-01-05 00:05:19.758382 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-05 00:05:19.787023 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:19.787092 | 
orchestrator | 2026-01-05 00:05:19.787104 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-05 00:05:19.821225 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:19.821317 | orchestrator | 2026-01-05 00:05:19.821331 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-05 00:05:19.856106 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:19.856174 | orchestrator | 2026-01-05 00:05:19.856186 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-05 00:05:19.895963 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:19.896015 | orchestrator | 2026-01-05 00:05:19.896022 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-05 00:05:19.935219 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:05:19.935298 | orchestrator | 2026-01-05 00:05:19.935311 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-05 00:05:20.754815 | orchestrator | changed: [testbed-manager] 2026-01-05 00:05:20.754924 | orchestrator | 2026-01-05 00:05:20.754937 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-05 00:08:09.029963 | orchestrator | changed: [testbed-manager] 2026-01-05 00:08:09.030085 | orchestrator | 2026-01-05 00:08:09.030104 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-05 00:09:44.757514 | orchestrator | changed: [testbed-manager] 2026-01-05 00:09:44.757577 | orchestrator | 2026-01-05 00:09:44.757589 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-05 00:10:09.216602 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:09.216682 | orchestrator | 2026-01-05 00:10:09.216703 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2026-01-05 00:10:18.734113 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:18.734187 | orchestrator | 2026-01-05 00:10:18.734199 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-05 00:10:18.768568 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:18.768645 | orchestrator | 2026-01-05 00:10:18.768652 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-05 00:10:19.559542 | orchestrator | ok: [testbed-manager] 2026-01-05 00:10:19.559745 | orchestrator | 2026-01-05 00:10:19.559758 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-05 00:10:20.348989 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:20.349072 | orchestrator | 2026-01-05 00:10:20.349086 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-05 00:10:27.029623 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:27.029730 | orchestrator | 2026-01-05 00:10:27.029781 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-05 00:10:33.475201 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:33.475312 | orchestrator | 2026-01-05 00:10:33.475330 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-05 00:10:36.135488 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:36.135557 | orchestrator | 2026-01-05 00:10:36.135564 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-05 00:10:37.955143 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:37.955272 | orchestrator | 2026-01-05 00:10:37.955289 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-05 
00:10:39.099183 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-05 00:10:39.099282 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-05 00:10:39.099298 | orchestrator | 2026-01-05 00:10:39.099311 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-05 00:10:39.150657 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-05 00:10:39.150738 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-05 00:10:39.150752 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-05 00:10:39.150765 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-05 00:10:42.614237 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-05 00:10:42.614341 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-05 00:10:42.614358 | orchestrator | 2026-01-05 00:10:42.614371 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-05 00:10:43.212131 | orchestrator | changed: [testbed-manager] 2026-01-05 00:10:43.212244 | orchestrator | 2026-01-05 00:10:43.212261 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-05 00:13:02.854140 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-05 00:13:02.854259 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-05 00:13:02.854276 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-05 00:13:02.854286 | orchestrator | 2026-01-05 00:13:02.854296 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-05 00:13:05.247604 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-01-05 00:13:05.247666 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-05 00:13:05.247672 | orchestrator | 2026-01-05 00:13:05.247677 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-05 00:13:05.247681 | orchestrator | 2026-01-05 00:13:05.247685 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:13:06.735156 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:06.735206 | orchestrator | 2026-01-05 00:13:06.735215 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-05 00:13:06.791113 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:06.791181 | orchestrator | 2026-01-05 00:13:06.791192 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-05 00:13:06.864565 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:06.864636 | orchestrator | 2026-01-05 00:13:06.864652 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-05 00:13:07.712971 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:07.713023 | orchestrator | 2026-01-05 00:13:07.713032 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-05 00:13:08.487196 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:08.487270 | orchestrator | 2026-01-05 00:13:08.487285 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-05 00:13:09.915776 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-05 00:13:09.915861 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-05 00:13:09.915877 | orchestrator | 2026-01-05 00:13:09.915904 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-01-05 00:13:11.362533 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:11.362649 | orchestrator | 2026-01-05 00:13:11.362665 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-05 00:13:13.183832 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-05 00:13:13.184690 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-05 00:13:13.184719 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-05 00:13:13.184732 | orchestrator | 2026-01-05 00:13:13.184745 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-05 00:13:13.236607 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:13.236656 | orchestrator | 2026-01-05 00:13:13.236665 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-05 00:13:13.318097 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:13.318206 | orchestrator | 2026-01-05 00:13:13.318235 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-05 00:13:13.908395 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:13.908443 | orchestrator | 2026-01-05 00:13:13.908452 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-05 00:13:13.980261 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:13.980352 | orchestrator | 2026-01-05 00:13:13.980369 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-05 00:13:14.879601 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:13:14.879661 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:14.879671 | orchestrator | 2026-01-05 00:13:14.879680 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-05 00:13:14.919869 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:14.920115 | orchestrator | 2026-01-05 00:13:14.920131 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-05 00:13:14.962994 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:14.963056 | orchestrator | 2026-01-05 00:13:14.963070 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-05 00:13:15.002715 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:15.002782 | orchestrator | 2026-01-05 00:13:15.002800 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-05 00:13:15.066654 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:15.066713 | orchestrator | 2026-01-05 00:13:15.066726 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-05 00:13:15.814131 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:15.814174 | orchestrator | 2026-01-05 00:13:15.814193 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-05 00:13:15.814198 | orchestrator | 2026-01-05 00:13:15.814203 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:13:17.256963 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:17.257010 | orchestrator | 2026-01-05 00:13:17.257016 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-05 00:13:18.244872 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:18.245667 | orchestrator | 2026-01-05 00:13:18.245706 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:13:18.245720 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-05 00:13:18.245732 | orchestrator | 2026-01-05 00:13:18.603974 | orchestrator | ok: Runtime: 0:08:04.273526 2026-01-05 00:13:18.622256 | 2026-01-05 00:13:18.622527 | TASK [Point out that the log in on the manager is now possible] 2026-01-05 00:13:18.671381 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-01-05 00:13:18.682116 | 2026-01-05 00:13:18.682262 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-05 00:13:18.718574 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-01-05 00:13:18.728367 | 2026-01-05 00:13:18.728571 | TASK [Run manager part 1 + 2] 2026-01-05 00:13:19.638992 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-05 00:13:19.693774 | orchestrator | 2026-01-05 00:13:19.693887 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-05 00:13:19.693907 | orchestrator | 2026-01-05 00:13:19.693937 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:13:22.766168 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:22.766287 | orchestrator | 2026-01-05 00:13:22.766356 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-05 00:13:22.805282 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:22.805375 | orchestrator | 2026-01-05 00:13:22.805395 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-05 00:13:22.852602 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:22.852662 | orchestrator | 2026-01-05 00:13:22.852673 | orchestrator | TASK [osism.commons.repository : Gather variables for 
each operating system] *** 2026-01-05 00:13:22.898728 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:22.898827 | orchestrator | 2026-01-05 00:13:22.898848 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-05 00:13:22.968590 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:22.968648 | orchestrator | 2026-01-05 00:13:22.968656 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-05 00:13:23.023242 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:23.023300 | orchestrator | 2026-01-05 00:13:23.023306 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-05 00:13:23.066497 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-05 00:13:23.066578 | orchestrator | 2026-01-05 00:13:23.066590 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-05 00:13:23.841474 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:23.841707 | orchestrator | 2026-01-05 00:13:23.841731 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-05 00:13:23.885464 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:23.885524 | orchestrator | 2026-01-05 00:13:23.885546 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-05 00:13:25.291305 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:25.291401 | orchestrator | 2026-01-05 00:13:25.291421 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-05 00:13:25.884143 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:25.884199 | orchestrator | 2026-01-05 00:13:25.884209 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-01-05 00:13:27.077410 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:27.077472 | orchestrator | 2026-01-05 00:13:27.077484 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-05 00:13:43.715292 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:43.715360 | orchestrator | 2026-01-05 00:13:43.715368 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-05 00:13:44.388740 | orchestrator | ok: [testbed-manager] 2026-01-05 00:13:44.388835 | orchestrator | 2026-01-05 00:13:44.388854 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-05 00:13:44.437325 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:44.437406 | orchestrator | 2026-01-05 00:13:44.437419 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-05 00:13:45.342852 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:45.342917 | orchestrator | 2026-01-05 00:13:45.342926 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-05 00:13:46.289796 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:46.289869 | orchestrator | 2026-01-05 00:13:46.289878 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-05 00:13:46.866188 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:46.867122 | orchestrator | 2026-01-05 00:13:46.867141 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-05 00:13:46.910179 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-05 00:13:46.910275 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-01-05 00:13:46.910284 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-05 00:13:46.910291 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-05 00:13:49.052357 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:49.052413 | orchestrator | 2026-01-05 00:13:49.052422 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-05 00:13:58.472181 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-05 00:13:58.472251 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-05 00:13:58.472266 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-05 00:13:58.472276 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-05 00:13:58.472293 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-05 00:13:58.472303 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-05 00:13:58.472312 | orchestrator | 2026-01-05 00:13:58.472323 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-05 00:13:59.545523 | orchestrator | changed: [testbed-manager] 2026-01-05 00:13:59.545635 | orchestrator | 2026-01-05 00:13:59.545699 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-05 00:13:59.590588 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:13:59.590710 | orchestrator | 2026-01-05 00:13:59.590726 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-05 00:14:02.841955 | orchestrator | changed: [testbed-manager] 2026-01-05 00:14:02.842090 | orchestrator | 2026-01-05 00:14:02.842107 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-05 00:14:02.886353 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:14:02.886432 | 
orchestrator | 2026-01-05 00:14:02.886443 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-05 00:15:46.440828 | orchestrator | changed: [testbed-manager] 2026-01-05 00:15:46.441014 | orchestrator | 2026-01-05 00:15:46.441051 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-05 00:15:47.634897 | orchestrator | ok: [testbed-manager] 2026-01-05 00:15:47.634940 | orchestrator | 2026-01-05 00:15:47.634961 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:15:47.634968 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-05 00:15:47.634973 | orchestrator | 2026-01-05 00:15:47.876622 | orchestrator | ok: Runtime: 0:02:28.683367 2026-01-05 00:15:47.896310 | 2026-01-05 00:15:47.896542 | TASK [Reboot manager] 2026-01-05 00:15:49.439299 | orchestrator | ok: Runtime: 0:00:01.023881 2026-01-05 00:15:49.451476 | 2026-01-05 00:15:49.451669 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-05 00:16:05.865896 | orchestrator | ok 2026-01-05 00:16:05.877663 | 2026-01-05 00:16:05.877809 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-05 00:17:05.922507 | orchestrator | ok 2026-01-05 00:17:05.932764 | 2026-01-05 00:17:05.932911 | TASK [Deploy manager + bootstrap nodes] 2026-01-05 00:17:08.822827 | orchestrator | 2026-01-05 00:17:08.823044 | orchestrator | # DEPLOY MANAGER 2026-01-05 00:17:08.823069 | orchestrator | 2026-01-05 00:17:08.823083 | orchestrator | + set -e 2026-01-05 00:17:08.823097 | orchestrator | + echo 2026-01-05 00:17:08.823111 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-05 00:17:08.823165 | orchestrator | + echo 2026-01-05 00:17:08.823219 | orchestrator | + cat /opt/manager-vars.sh 2026-01-05 00:17:08.825922 | orchestrator | export NUMBER_OF_NODES=6 2026-01-05 
00:17:08.825977 | orchestrator | 2026-01-05 00:17:08.825991 | orchestrator | export CEPH_VERSION=reef 2026-01-05 00:17:08.826007 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-05 00:17:08.826078 | orchestrator | export MANAGER_VERSION=9.5.0 2026-01-05 00:17:08.826103 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-05 00:17:08.826115 | orchestrator | 2026-01-05 00:17:08.826154 | orchestrator | export ARA=false 2026-01-05 00:17:08.826167 | orchestrator | export DEPLOY_MODE=manager 2026-01-05 00:17:08.826185 | orchestrator | export TEMPEST=false 2026-01-05 00:17:08.826197 | orchestrator | export IS_ZUUL=true 2026-01-05 00:17:08.826208 | orchestrator | 2026-01-05 00:17:08.826227 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 00:17:08.826239 | orchestrator | export EXTERNAL_API=false 2026-01-05 00:17:08.826251 | orchestrator | 2026-01-05 00:17:08.826262 | orchestrator | export IMAGE_USER=ubuntu 2026-01-05 00:17:08.826277 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-05 00:17:08.826288 | orchestrator | 2026-01-05 00:17:08.826300 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-05 00:17:08.826319 | orchestrator | 2026-01-05 00:17:08.826331 | orchestrator | + echo 2026-01-05 00:17:08.826344 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:17:08.827317 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 00:17:08.827335 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:17:08.827348 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:17:08.827363 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 00:17:08.827451 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:17:08.827466 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:17:08.827478 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 00:17:08.827489 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:17:08.827500 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 00:17:08.827515 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-01-05 00:17:08.827527 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 00:17:08.827538 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 00:17:08.827549 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 00:17:08.827560 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 00:17:08.827581 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 00:17:08.827593 | orchestrator | ++ export ARA=false 2026-01-05 00:17:08.827604 | orchestrator | ++ ARA=false 2026-01-05 00:17:08.827616 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 00:17:08.827627 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 00:17:08.827641 | orchestrator | ++ export TEMPEST=false 2026-01-05 00:17:08.827653 | orchestrator | ++ TEMPEST=false 2026-01-05 00:17:08.827663 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 00:17:08.827674 | orchestrator | ++ IS_ZUUL=true 2026-01-05 00:17:08.827686 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 00:17:08.827697 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 00:17:08.827708 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 00:17:08.827718 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 00:17:08.827729 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 00:17:08.827740 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 00:17:08.827752 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 00:17:08.827763 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 00:17:08.827778 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 00:17:08.827790 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 00:17:08.827801 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-05 00:17:08.884821 | orchestrator | + docker version 2026-01-05 00:17:09.190405 | orchestrator | Client: Docker Engine - Community 2026-01-05 00:17:09.190526 | orchestrator | Version: 27.5.1 
2026-01-05 00:17:09.190542 | orchestrator | API version: 1.47 2026-01-05 00:17:09.190552 | orchestrator | Go version: go1.22.11 2026-01-05 00:17:09.190562 | orchestrator | Git commit: 9f9e405 2026-01-05 00:17:09.190573 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-05 00:17:09.190585 | orchestrator | OS/Arch: linux/amd64 2026-01-05 00:17:09.190595 | orchestrator | Context: default 2026-01-05 00:17:09.190605 | orchestrator | 2026-01-05 00:17:09.190615 | orchestrator | Server: Docker Engine - Community 2026-01-05 00:17:09.190625 | orchestrator | Engine: 2026-01-05 00:17:09.190636 | orchestrator | Version: 27.5.1 2026-01-05 00:17:09.190646 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-05 00:17:09.190693 | orchestrator | Go version: go1.22.11 2026-01-05 00:17:09.190703 | orchestrator | Git commit: 4c9b3b0 2026-01-05 00:17:09.190713 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-05 00:17:09.190723 | orchestrator | OS/Arch: linux/amd64 2026-01-05 00:17:09.190733 | orchestrator | Experimental: false 2026-01-05 00:17:09.190743 | orchestrator | containerd: 2026-01-05 00:17:09.190766 | orchestrator | Version: v2.2.1 2026-01-05 00:17:09.190777 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-05 00:17:09.190788 | orchestrator | runc: 2026-01-05 00:17:09.190797 | orchestrator | Version: 1.3.4 2026-01-05 00:17:09.190807 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-05 00:17:09.190817 | orchestrator | docker-init: 2026-01-05 00:17:09.190827 | orchestrator | Version: 0.19.0 2026-01-05 00:17:09.190837 | orchestrator | GitCommit: de40ad0 2026-01-05 00:17:09.195417 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-05 00:17:09.205357 | orchestrator | + set -e 2026-01-05 00:17:09.205376 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 00:17:09.205389 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 00:17:09.205399 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 
00:17:09.205409 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 00:17:09.205419 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 00:17:09.205429 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 00:17:09.205439 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 00:17:09.205449 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 00:17:09.205458 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 00:17:09.205468 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 00:17:09.205478 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 00:17:09.205488 | orchestrator | ++ export ARA=false 2026-01-05 00:17:09.205498 | orchestrator | ++ ARA=false 2026-01-05 00:17:09.205508 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 00:17:09.205517 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 00:17:09.205527 | orchestrator | ++ export TEMPEST=false 2026-01-05 00:17:09.205537 | orchestrator | ++ TEMPEST=false 2026-01-05 00:17:09.205547 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 00:17:09.205557 | orchestrator | ++ IS_ZUUL=true 2026-01-05 00:17:09.205566 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 00:17:09.205576 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 00:17:09.205586 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 00:17:09.205596 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 00:17:09.205605 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 00:17:09.205615 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 00:17:09.205625 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 00:17:09.205635 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 00:17:09.205645 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 00:17:09.205654 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 00:17:09.205664 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 00:17:09.205674 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-05 00:17:09.205683 | orchestrator | ++ INTERACTIVE=false 2026-01-05 00:17:09.205693 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 00:17:09.205708 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 00:17:09.205721 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-01-05 00:17:09.205732 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-01-05 00:17:09.211110 | orchestrator | + set -e 2026-01-05 00:17:09.211151 | orchestrator | + VERSION=9.5.0 2026-01-05 00:17:09.211163 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:17:09.218798 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-01-05 00:17:09.218817 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:17:09.222858 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-01-05 00:17:09.225752 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-01-05 00:17:09.233088 | orchestrator | /opt/configuration ~ 2026-01-05 00:17:09.233111 | orchestrator | + set -e 2026-01-05 00:17:09.233122 | orchestrator | + pushd /opt/configuration 2026-01-05 00:17:09.233157 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 00:17:09.234524 | orchestrator | + source /opt/venv/bin/activate 2026-01-05 00:17:09.236480 | orchestrator | ++ deactivate nondestructive 2026-01-05 00:17:09.236495 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:17:09.236509 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:17:09.236533 | orchestrator | ++ hash -r 2026-01-05 00:17:09.236543 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:17:09.236553 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-05 00:17:09.236563 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-05 00:17:09.236572 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-01-05 00:17:09.236582 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-05 00:17:09.236592 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-05 00:17:09.236601 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-05 00:17:09.236611 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-05 00:17:09.236622 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:17:09.236632 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:17:09.236642 | orchestrator | ++ export PATH 2026-01-05 00:17:09.236652 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:17:09.236662 | orchestrator | ++ '[' -z '' ']' 2026-01-05 00:17:09.236672 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-05 00:17:09.236681 | orchestrator | ++ PS1='(venv) ' 2026-01-05 00:17:09.236691 | orchestrator | ++ export PS1 2026-01-05 00:17:09.236701 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-05 00:17:09.236714 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-05 00:17:09.236724 | orchestrator | ++ hash -r 2026-01-05 00:17:09.236734 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-01-05 00:17:10.425826 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-01-05 00:17:10.426698 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-01-05 00:17:10.428660 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-01-05 00:17:10.430501 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-01-05 00:17:10.432313 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0) 2026-01-05 00:17:10.443011 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-01-05 00:17:10.444525 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-01-05 00:17:10.445727 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-01-05 00:17:10.447349 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-01-05 00:17:10.481182 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-01-05 00:17:10.482676 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-01-05 00:17:10.485201 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.2) 2026-01-05 00:17:10.485967 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-01-05 00:17:10.490079 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-01-05 00:17:10.712625 | orchestrator | ++ which gilt 2026-01-05 00:17:10.715712 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-01-05 00:17:10.715783 | orchestrator | + /opt/venv/bin/gilt overlay 2026-01-05 00:17:10.952434 | orchestrator | osism.cfg-generics: 2026-01-05 00:17:11.143774 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-01-05 00:17:11.143907 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-01-05 00:17:11.143924 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-01-05 00:17:11.143939 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-01-05 00:17:11.897711 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-01-05 00:17:11.910579 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-01-05 00:17:12.394851 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-01-05 00:17:12.444421 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 00:17:12.444525 | orchestrator | + deactivate 2026-01-05 00:17:12.444541 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-05 00:17:12.444554 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:17:12.444565 | orchestrator | + export PATH 2026-01-05 00:17:12.444575 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-05 00:17:12.444586 | orchestrator | + '[' -n '' ']' 2026-01-05 00:17:12.444598 | orchestrator | + hash -r 2026-01-05 00:17:12.444608 | orchestrator | + '[' -n '' ']' 2026-01-05 00:17:12.444618 | orchestrator | + unset VIRTUAL_ENV 2026-01-05 00:17:12.444628 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-05 00:17:12.444638 | orchestrator | ~ 2026-01-05 00:17:12.444648 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-05 00:17:12.444657 | orchestrator | + unset -f deactivate 2026-01-05 00:17:12.444668 | orchestrator | + popd 2026-01-05 00:17:12.446288 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-05 00:17:12.446366 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-05 00:17:12.447327 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-05 00:17:12.520936 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-05 00:17:12.521027 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-05 00:17:12.522399 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-05 00:17:12.586829 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:17:12.587638 | orchestrator | ++ semver 2024.2 2025.1 2026-01-05 00:17:12.648513 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 00:17:12.648605 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-05 00:17:12.745342 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 00:17:12.745495 | orchestrator | + source /opt/venv/bin/activate 2026-01-05 00:17:12.745522 | orchestrator | ++ deactivate nondestructive 2026-01-05 00:17:12.745562 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:17:12.745584 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:17:12.745604 | orchestrator | ++ hash -r 2026-01-05 00:17:12.745623 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:17:12.745643 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-05 00:17:12.745663 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-05 00:17:12.745683 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-01-05 00:17:12.745774 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-05 00:17:12.745800 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-05 00:17:12.745813 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-05 00:17:12.745828 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-05 00:17:12.745993 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:17:12.746081 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:17:12.746335 | orchestrator | ++ export PATH 2026-01-05 00:17:12.746353 | orchestrator | ++ '[' -n '' ']' 2026-01-05 00:17:12.746365 | orchestrator | ++ '[' -z '' ']' 2026-01-05 00:17:12.746376 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-05 00:17:12.746392 | orchestrator | ++ PS1='(venv) ' 2026-01-05 00:17:12.746403 | orchestrator | ++ export PS1 2026-01-05 00:17:12.746414 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-05 00:17:12.746425 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-05 00:17:12.746437 | orchestrator | ++ hash -r 2026-01-05 00:17:12.746502 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-05 00:17:14.021528 | orchestrator | 2026-01-05 00:17:14.021659 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-05 00:17:14.021676 | orchestrator | 2026-01-05 00:17:14.021688 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-05 00:17:14.637771 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:14.637934 | orchestrator | 2026-01-05 00:17:14.637967 | orchestrator | TASK [Copy fact files] ********************************************************* 
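The trace above gates configuration on `semver` comparisons: `semver 9.5.0 7.0.0` prints `1` (so `enable_osism_kubernetes: true` is appended), while `semver 9.5.0 10.0.0-0` prints `-1` (so features gated on `>= 10.0.0` are skipped). A minimal sketch of such a helper, assuming the same `-1`/`0`/`1` contract (the real script is not shown in this log, and pre-release suffixes like `10.0.0-0` are only approximated by `sort -V` here):

```shell
# Hypothetical semver_cmp: prints -1, 0, or 1 depending on how $1
# compares to $2. Relies on GNU sort's version ordering (-V).
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1   # $1 sorts first, so it is the lower version
  else
    echo 1    # $2 sorts first, so $1 is the higher version
  fi
}

semver_cmp 9.5.0 7.0.0     # prints 1  -> feature enabled
semver_cmp 9.5.0 10.0.0-0  # prints -1 -> feature skipped
```

This matches the three comparisons visible in the trace (`9.5.0` vs `7.0.0`, `9.5.0` vs `10.0.0-0`, `2024.2` vs `2025.1`).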
2026-01-05 00:17:15.693780 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:15.693909 | orchestrator | 2026-01-05 00:17:15.693925 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-05 00:17:15.693979 | orchestrator | 2026-01-05 00:17:15.693992 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:17:18.076139 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:18.076294 | orchestrator | 2026-01-05 00:17:18.076311 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-05 00:17:18.126956 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:18.127075 | orchestrator | 2026-01-05 00:17:18.127091 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-05 00:17:18.626780 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:18.626949 | orchestrator | 2026-01-05 00:17:18.626977 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-05 00:17:18.676259 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:18.676376 | orchestrator | 2026-01-05 00:17:18.676391 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-05 00:17:19.033855 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:19.033970 | orchestrator | 2026-01-05 00:17:19.033988 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-05 00:17:19.097121 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:19.097264 | orchestrator | 2026-01-05 00:17:19.097289 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-05 00:17:19.462389 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:19.462495 | orchestrator | 2026-01-05 00:17:19.462510 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2026-01-05 00:17:19.593711 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:19.593818 | orchestrator | 2026-01-05 00:17:19.593833 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-05 00:17:19.593847 | orchestrator | 2026-01-05 00:17:19.593858 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:17:21.495992 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:21.496109 | orchestrator | 2026-01-05 00:17:21.496127 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-05 00:17:21.602565 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-05 00:17:21.602663 | orchestrator | 2026-01-05 00:17:21.602677 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-05 00:17:21.673024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-05 00:17:21.673123 | orchestrator | 2026-01-05 00:17:21.673139 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-05 00:17:22.846565 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-05 00:17:22.846649 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-05 00:17:22.846658 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-05 00:17:22.846662 | orchestrator | 2026-01-05 00:17:22.846667 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-05 00:17:24.736635 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-05 00:17:24.736748 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2026-01-05 00:17:24.736764 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-05 00:17:24.736777 | orchestrator | 2026-01-05 00:17:24.736790 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-05 00:17:25.565707 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:17:25.565823 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:25.565842 | orchestrator | 2026-01-05 00:17:25.565855 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-05 00:17:26.265214 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:17:26.265346 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:26.265366 | orchestrator | 2026-01-05 00:17:26.265379 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-05 00:17:26.328857 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:26.328966 | orchestrator | 2026-01-05 00:17:26.328982 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-05 00:17:26.709588 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:26.709689 | orchestrator | 2026-01-05 00:17:26.709702 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-05 00:17:26.806664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-05 00:17:26.806772 | orchestrator | 2026-01-05 00:17:26.806787 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-05 00:17:27.974325 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:27.974446 | orchestrator | 2026-01-05 00:17:27.974465 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-05 
00:17:28.910511 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:28.910624 | orchestrator | 2026-01-05 00:17:28.910641 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-05 00:17:38.808756 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:38.808885 | orchestrator | 2026-01-05 00:17:38.808938 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-05 00:17:38.880636 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:38.880737 | orchestrator | 2026-01-05 00:17:38.880752 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-05 00:17:38.880764 | orchestrator | 2026-01-05 00:17:38.880776 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:17:40.872381 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:40.872499 | orchestrator | 2026-01-05 00:17:40.872517 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-05 00:17:40.989797 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-05 00:17:40.989921 | orchestrator | 2026-01-05 00:17:40.989938 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-05 00:17:41.059057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:17:41.059143 | orchestrator | 2026-01-05 00:17:41.059152 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-05 00:17:43.928881 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:43.929030 | orchestrator | 2026-01-05 00:17:43.929048 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-05 00:17:43.984889 | 
orchestrator | ok: [testbed-manager] 2026-01-05 00:17:43.984995 | orchestrator | 2026-01-05 00:17:43.985011 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-05 00:17:44.105383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-05 00:17:44.105490 | orchestrator | 2026-01-05 00:17:44.105506 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-05 00:17:46.806595 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-05 00:17:46.806726 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-05 00:17:46.806745 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-05 00:17:46.806766 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-05 00:17:46.806784 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-05 00:17:46.806806 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-05 00:17:46.806826 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-05 00:17:46.806841 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-05 00:17:46.806853 | orchestrator | 2026-01-05 00:17:46.806868 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-05 00:17:47.383462 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:47.383589 | orchestrator | 2026-01-05 00:17:47.383607 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-05 00:17:47.965484 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:47.965604 | orchestrator | 2026-01-05 00:17:47.965620 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-05 
00:17:48.041863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-05 00:17:48.042066 | orchestrator | 2026-01-05 00:17:48.042085 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-05 00:17:49.304757 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-05 00:17:49.304883 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-05 00:17:49.304898 | orchestrator | 2026-01-05 00:17:49.304912 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-05 00:17:49.990853 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:49.990972 | orchestrator | 2026-01-05 00:17:49.990988 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-05 00:17:50.046962 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:50.047091 | orchestrator | 2026-01-05 00:17:50.047115 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-05 00:17:50.130113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-05 00:17:50.130265 | orchestrator | 2026-01-05 00:17:50.130281 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-05 00:17:50.802926 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:50.803046 | orchestrator | 2026-01-05 00:17:50.803061 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-05 00:17:50.869363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-05 00:17:50.869481 | orchestrator | 2026-01-05 00:17:50.869497 
| orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-05 00:17:52.313143 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:17:52.313342 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:17:52.313360 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:52.313374 | orchestrator | 2026-01-05 00:17:52.313386 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-05 00:17:53.015154 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:53.015321 | orchestrator | 2026-01-05 00:17:53.015344 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-05 00:17:53.072777 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:53.072877 | orchestrator | 2026-01-05 00:17:53.072907 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-05 00:17:53.177490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-05 00:17:53.177597 | orchestrator | 2026-01-05 00:17:53.177609 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-05 00:17:53.745921 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:53.746091 | orchestrator | 2026-01-05 00:17:53.746109 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-05 00:17:54.198311 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:54.198429 | orchestrator | 2026-01-05 00:17:54.198443 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-05 00:17:55.586359 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-05 00:17:55.586485 | orchestrator | changed: [testbed-manager] => (item=openstack) 
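The two `fs.inotify` tasks above raise kernel inotify limits, which container-heavy manager hosts commonly exhaust. A hedged sketch of what such tasks amount to (the concrete values are assumptions; the log shows only the task names, not the numbers):

```shell
# Sketch only: a real deployment would write /etc/sysctl.d/99-inotify.conf
# and apply it with `sysctl -p` as root. Values below are illustrative.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
EOF
# Applying would then be: sysctl -p "$conf"   (requires root privileges)
```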
2026-01-05 00:17:55.586500 | orchestrator | 2026-01-05 00:17:55.586514 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-05 00:17:56.297917 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:56.298091 | orchestrator | 2026-01-05 00:17:56.298109 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-05 00:17:56.729732 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:56.729837 | orchestrator | 2026-01-05 00:17:56.729847 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-05 00:17:57.104375 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:57.104496 | orchestrator | 2026-01-05 00:17:57.104513 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-05 00:17:57.153486 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:17:57.153636 | orchestrator | 2026-01-05 00:17:57.153653 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-05 00:17:57.229370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-05 00:17:57.229490 | orchestrator | 2026-01-05 00:17:57.229508 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-05 00:17:57.279669 | orchestrator | ok: [testbed-manager] 2026-01-05 00:17:57.279770 | orchestrator | 2026-01-05 00:17:57.279788 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-05 00:17:59.211691 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-05 00:17:59.211776 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-05 00:17:59.211783 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 
2026-01-05 00:17:59.211787 | orchestrator | 2026-01-05 00:17:59.211792 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-05 00:17:59.901106 | orchestrator | changed: [testbed-manager] 2026-01-05 00:17:59.901270 | orchestrator | 2026-01-05 00:17:59.901288 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-05 00:18:00.596502 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:00.596595 | orchestrator | 2026-01-05 00:18:00.596607 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-05 00:18:01.284280 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:01.284391 | orchestrator | 2026-01-05 00:18:01.284407 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-05 00:18:01.355245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-05 00:18:01.355352 | orchestrator | 2026-01-05 00:18:01.355367 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-05 00:18:01.408804 | orchestrator | ok: [testbed-manager] 2026-01-05 00:18:01.408883 | orchestrator | 2026-01-05 00:18:01.408896 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-05 00:18:02.071194 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-05 00:18:02.071367 | orchestrator | 2026-01-05 00:18:02.071384 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-05 00:18:02.146149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-05 00:18:02.146298 | orchestrator | 2026-01-05 00:18:02.146316 | orchestrator | 
TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-05 00:18:02.801678 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:02.801794 | orchestrator | 2026-01-05 00:18:02.801812 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-05 00:18:03.362062 | orchestrator | ok: [testbed-manager] 2026-01-05 00:18:03.362169 | orchestrator | 2026-01-05 00:18:03.362184 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-05 00:18:03.419461 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:18:03.419568 | orchestrator | 2026-01-05 00:18:03.419584 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-05 00:18:03.467102 | orchestrator | ok: [testbed-manager] 2026-01-05 00:18:03.467209 | orchestrator | 2026-01-05 00:18:03.467226 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-05 00:18:04.224547 | orchestrator | changed: [testbed-manager] 2026-01-05 00:18:04.224670 | orchestrator | 2026-01-05 00:18:04.224685 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-05 00:19:16.694286 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:16.694440 | orchestrator | 2026-01-05 00:19:16.694463 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-05 00:19:17.712296 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:17.712422 | orchestrator | 2026-01-05 00:19:17.712449 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-05 00:19:17.765422 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:17.765557 | orchestrator | 2026-01-05 00:19:17.765582 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2026-01-05 00:19:20.315904 | orchestrator | changed: [testbed-manager] 2026-01-05 00:19:20.316048 | orchestrator | 2026-01-05 00:19:20.316068 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-05 00:19:20.412113 | orchestrator | ok: [testbed-manager] 2026-01-05 00:19:20.412210 | orchestrator | 2026-01-05 00:19:20.412219 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-05 00:19:20.412226 | orchestrator | 2026-01-05 00:19:20.412257 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-05 00:19:20.488807 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:19:20.488905 | orchestrator | 2026-01-05 00:19:20.488918 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-05 00:20:20.545472 | orchestrator | Pausing for 60 seconds 2026-01-05 00:20:20.545614 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:20.545633 | orchestrator | 2026-01-05 00:20:20.545647 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-05 00:20:23.709021 | orchestrator | changed: [testbed-manager] 2026-01-05 00:20:23.709143 | orchestrator | 2026-01-05 00:20:23.709160 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-05 00:21:25.866203 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-05 00:21:25.866369 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-05 00:21:25.866479 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
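The `Wait for an healthy manager service` handler above retried three times (50 retries configured) before the container reported healthy. A minimal sketch of the polling pattern such a handler typically implements, assuming Docker's built-in healthcheck status is the signal (container name, retry count, and delay are illustrative, not taken from this log):

```shell
# Poll a container's health status until it is "healthy" or retries
# are exhausted. Returns 0 on success, 1 on timeout.
wait_healthy() {
  local name="$1" retries="${2:-50}" delay="${3:-5}" state
  for _ in $(seq "$retries"); do
    state="$(docker inspect --format '{{.State.Health.Status}}' "$name" 2>/dev/null)"
    [ "$state" = "healthy" ] && return 0
    sleep "$delay"
  done
  return 1
}

# Usage (hypothetical container name):
#   wait_healthy manager-osism-1 50 5
```

The retry counter in the log (`50 retries left` counting down) is consistent with a loop of this shape.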
2026-01-05 00:21:25.866494 | orchestrator | changed: [testbed-manager] 2026-01-05 00:21:25.866509 | orchestrator | 2026-01-05 00:21:25.866522 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-05 00:21:36.161254 | orchestrator | changed: [testbed-manager] 2026-01-05 00:21:36.161359 | orchestrator | 2026-01-05 00:21:36.161372 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-05 00:21:36.246298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-05 00:21:36.246470 | orchestrator | 2026-01-05 00:21:36.246487 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-05 00:21:36.246499 | orchestrator | 2026-01-05 00:21:36.246515 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-05 00:21:36.294486 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:21:36.294626 | orchestrator | 2026-01-05 00:21:36.294656 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-05 00:21:36.365489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-05 00:21:36.365586 | orchestrator | 2026-01-05 00:21:36.365596 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-05 00:21:37.174085 | orchestrator | changed: [testbed-manager] 2026-01-05 00:21:37.174209 | orchestrator | 2026-01-05 00:21:37.174227 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-05 00:21:40.572849 | orchestrator | ok: [testbed-manager] 2026-01-05 00:21:40.572972 | orchestrator | 2026-01-05 00:21:40.572990 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-01-05 00:21:40.649831 | orchestrator | ok: [testbed-manager] => { 2026-01-05 00:21:40.649960 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-05 00:21:40.649988 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-05 00:21:40.650009 | orchestrator | "Checking running containers against expected versions...", 2026-01-05 00:21:40.650084 | orchestrator | "", 2026-01-05 00:21:40.650097 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-05 00:21:40.650109 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-01-05 00:21:40.650121 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650133 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-01-05 00:21:40.650172 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650183 | orchestrator | "", 2026-01-05 00:21:40.650195 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-05 00:21:40.650207 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-01-05 00:21:40.650218 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650229 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-01-05 00:21:40.650240 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650251 | orchestrator | "", 2026-01-05 00:21:40.650261 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-05 00:21:40.650272 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-01-05 00:21:40.650283 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650294 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-01-05 00:21:40.650305 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650315 | orchestrator | 
"", 2026-01-05 00:21:40.650326 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-05 00:21:40.650337 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-01-05 00:21:40.650349 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650363 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-01-05 00:21:40.650376 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650388 | orchestrator | "", 2026-01-05 00:21:40.650437 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-05 00:21:40.650458 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-01-05 00:21:40.650478 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650493 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-01-05 00:21:40.650506 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650519 | orchestrator | "", 2026-01-05 00:21:40.650531 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-05 00:21:40.650544 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.650556 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650569 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.650581 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650594 | orchestrator | "", 2026-01-05 00:21:40.650607 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-05 00:21:40.650619 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-05 00:21:40.650631 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650643 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-05 00:21:40.650655 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650668 | orchestrator | "", 2026-01-05 00:21:40.650681 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-01-05 00:21:40.650693 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-05 00:21:40.650711 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650729 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-05 00:21:40.650748 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650765 | orchestrator | "", 2026-01-05 00:21:40.650784 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-05 00:21:40.650803 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-01-05 00:21:40.650822 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650842 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-01-05 00:21:40.650860 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650880 | orchestrator | "", 2026-01-05 00:21:40.650892 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-05 00:21:40.650903 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-05 00:21:40.650913 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.650925 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-05 00:21:40.650948 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.650959 | orchestrator | "", 2026-01-05 00:21:40.650970 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-05 00:21:40.650980 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.650992 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.651002 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.651013 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.651024 | orchestrator | "", 2026-01-05 00:21:40.651035 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-05 00:21:40.651046 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.651058 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.651069 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.651080 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.651091 | orchestrator | "", 2026-01-05 00:21:40.651102 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-05 00:21:40.651112 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.651123 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.651134 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.651145 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.651156 | orchestrator | "", 2026-01-05 00:21:40.651167 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-05 00:21:40.651178 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.651189 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.651200 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.651234 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.651245 | orchestrator | "", 2026-01-05 00:21:40.651266 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-05 00:21:40.651277 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.651288 | orchestrator | " Enabled: true", 2026-01-05 00:21:40.651299 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-05 00:21:40.651309 | orchestrator | " Status: ✅ MATCH", 2026-01-05 00:21:40.651320 | orchestrator | "", 2026-01-05 00:21:40.651331 | orchestrator | "=== Summary ===", 2026-01-05 00:21:40.651342 | orchestrator | "Errors (version mismatches): 0", 2026-01-05 00:21:40.651353 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-01-05 00:21:40.651363 | orchestrator | "", 2026-01-05 00:21:40.651374 | orchestrator | "✅ All running containers match expected versions!" 2026-01-05 00:21:40.651385 | orchestrator | ] 2026-01-05 00:21:40.651435 | orchestrator | } 2026-01-05 00:21:40.651446 | orchestrator | 2026-01-05 00:21:40.651458 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-05 00:21:40.709490 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:21:40.709593 | orchestrator | 2026-01-05 00:21:40.709608 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:21:40.709623 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-05 00:21:40.709635 | orchestrator | 2026-01-05 00:21:40.819700 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 00:21:40.819837 | orchestrator | + deactivate 2026-01-05 00:21:40.819862 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-05 00:21:40.819884 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 00:21:40.819902 | orchestrator | + export PATH 2026-01-05 00:21:40.819922 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-05 00:21:40.819942 | orchestrator | + '[' -n '' ']' 2026-01-05 00:21:40.819962 | orchestrator | + hash -r 2026-01-05 00:21:40.819981 | orchestrator | + '[' -n '' ']' 2026-01-05 00:21:40.819999 | orchestrator | + unset VIRTUAL_ENV 2026-01-05 00:21:40.820018 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-05 00:21:40.820037 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-05 00:21:40.820056 | orchestrator | + unset -f deactivate 2026-01-05 00:21:40.820187 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-05 00:21:40.829220 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-05 00:21:40.829275 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-05 00:21:40.829288 | orchestrator | + local max_attempts=60 2026-01-05 00:21:40.829301 | orchestrator | + local name=ceph-ansible 2026-01-05 00:21:40.829312 | orchestrator | + local attempt_num=1 2026-01-05 00:21:40.830922 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-05 00:21:40.874516 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:21:40.874624 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-05 00:21:40.874644 | orchestrator | + local max_attempts=60 2026-01-05 00:21:40.874665 | orchestrator | + local name=kolla-ansible 2026-01-05 00:21:40.874685 | orchestrator | + local attempt_num=1 2026-01-05 00:21:40.875272 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-05 00:21:40.912452 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:21:40.912574 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-05 00:21:40.912589 | orchestrator | + local max_attempts=60 2026-01-05 00:21:40.912602 | orchestrator | + local name=osism-ansible 2026-01-05 00:21:40.912614 | orchestrator | + local attempt_num=1 2026-01-05 00:21:40.912714 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-05 00:21:40.951733 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 00:21:40.951824 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-05 00:21:40.951837 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-05 00:21:41.707569 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-05 00:21:41.893950 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-05 00:21:41.894160 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-05 00:21:41.894178 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-05 00:21:41.894190 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-05 00:21:41.894203 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-05 00:21:41.894235 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:21:41.894247 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:21:41.894258 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-05 00:21:41.894269 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:21:41.894279 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-05 00:21:41.894290 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-01-05 00:21:41.894301 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-05 00:21:41.894341 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-05 00:21:41.894353 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-05 00:21:41.894364 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-05 00:21:41.894480 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-05 00:21:41.902054 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-05 00:21:41.970340 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-05 00:21:41.970488 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-05 00:21:41.974922 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-05 00:21:54.430120 | orchestrator | 2026-01-05 00:21:54 | INFO  | Task a1fb8878-69e7-4821-8293-1971c64a23db (resolvconf) was prepared for execution. 2026-01-05 00:21:54.430269 | orchestrator | 2026-01-05 00:21:54 | INFO  | It takes a moment until task a1fb8878-69e7-4821-8293-1971c64a23db (resolvconf) has been started and output is visible here. 
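The `wait_for_container_healthy` calls traced above poll the Docker health status until it reports `healthy`. A minimal sketch of such a helper, reconstructed from the `set -x` output (the retry delay and failure message are assumptions; only the variable names and the `docker inspect` health query appear in the trace, and the actual script ships with the testbed configuration):

```shell
# Reconstruction of the traced helper; polling interval and error
# handling are assumptions, not taken from the log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Query the container health status until Docker reports "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the trace, the helper is invoked as `wait_for_container_healthy 60 ceph-ansible` and returns immediately, since the first inspect already reports `healthy`.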
2026-01-05 00:22:09.998970 | orchestrator | 2026-01-05 00:22:09.999109 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-05 00:22:09.999126 | orchestrator | 2026-01-05 00:22:09.999139 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 00:22:09.999151 | orchestrator | Monday 05 January 2026 00:21:58 +0000 (0:00:00.141) 0:00:00.141 ******** 2026-01-05 00:22:09.999164 | orchestrator | ok: [testbed-manager] 2026-01-05 00:22:09.999176 | orchestrator | 2026-01-05 00:22:09.999188 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-05 00:22:09.999200 | orchestrator | Monday 05 January 2026 00:22:02 +0000 (0:00:03.886) 0:00:04.028 ******** 2026-01-05 00:22:09.999211 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:22:09.999224 | orchestrator | 2026-01-05 00:22:09.999236 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-05 00:22:09.999247 | orchestrator | Monday 05 January 2026 00:22:02 +0000 (0:00:00.071) 0:00:04.100 ******** 2026-01-05 00:22:09.999259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-05 00:22:09.999272 | orchestrator | 2026-01-05 00:22:09.999283 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-05 00:22:09.999294 | orchestrator | Monday 05 January 2026 00:22:02 +0000 (0:00:00.078) 0:00:04.179 ******** 2026-01-05 00:22:09.999330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:22:09.999342 | orchestrator | 2026-01-05 00:22:09.999353 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-01-05 00:22:09.999364 | orchestrator | Monday 05 January 2026 00:22:02 +0000 (0:00:00.087) 0:00:04.267 ******** 2026-01-05 00:22:09.999376 | orchestrator | ok: [testbed-manager] 2026-01-05 00:22:09.999387 | orchestrator | 2026-01-05 00:22:09.999398 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-05 00:22:09.999409 | orchestrator | Monday 05 January 2026 00:22:03 +0000 (0:00:01.193) 0:00:05.461 ******** 2026-01-05 00:22:09.999420 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:22:09.999462 | orchestrator | 2026-01-05 00:22:09.999474 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-05 00:22:09.999511 | orchestrator | Monday 05 January 2026 00:22:04 +0000 (0:00:00.067) 0:00:05.529 ******** 2026-01-05 00:22:09.999524 | orchestrator | ok: [testbed-manager] 2026-01-05 00:22:09.999536 | orchestrator | 2026-01-05 00:22:09.999549 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-05 00:22:09.999562 | orchestrator | Monday 05 January 2026 00:22:05 +0000 (0:00:01.554) 0:00:07.083 ******** 2026-01-05 00:22:09.999576 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:22:09.999588 | orchestrator | 2026-01-05 00:22:09.999601 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-05 00:22:09.999616 | orchestrator | Monday 05 January 2026 00:22:05 +0000 (0:00:00.086) 0:00:07.170 ******** 2026-01-05 00:22:09.999629 | orchestrator | changed: [testbed-manager] 2026-01-05 00:22:09.999642 | orchestrator | 2026-01-05 00:22:09.999655 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-05 00:22:09.999666 | orchestrator | Monday 05 January 2026 00:22:06 +0000 (0:00:00.583) 0:00:07.753 ******** 2026-01-05 00:22:09.999677 | orchestrator | changed: 
[testbed-manager] 2026-01-05 00:22:09.999688 | orchestrator | 2026-01-05 00:22:09.999700 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-05 00:22:09.999711 | orchestrator | Monday 05 January 2026 00:22:07 +0000 (0:00:01.169) 0:00:08.923 ******** 2026-01-05 00:22:09.999721 | orchestrator | ok: [testbed-manager] 2026-01-05 00:22:09.999732 | orchestrator | 2026-01-05 00:22:09.999743 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-05 00:22:09.999754 | orchestrator | Monday 05 January 2026 00:22:08 +0000 (0:00:01.018) 0:00:09.941 ******** 2026-01-05 00:22:09.999765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-05 00:22:09.999776 | orchestrator | 2026-01-05 00:22:09.999787 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-05 00:22:09.999798 | orchestrator | Monday 05 January 2026 00:22:08 +0000 (0:00:00.088) 0:00:10.030 ******** 2026-01-05 00:22:09.999809 | orchestrator | changed: [testbed-manager] 2026-01-05 00:22:09.999820 | orchestrator | 2026-01-05 00:22:09.999831 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:22:09.999843 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:22:09.999854 | orchestrator | 2026-01-05 00:22:09.999865 | orchestrator | 2026-01-05 00:22:09.999876 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:22:09.999887 | orchestrator | Monday 05 January 2026 00:22:09 +0000 (0:00:01.214) 0:00:11.245 ******** 2026-01-05 00:22:09.999898 | orchestrator | =============================================================================== 2026-01-05 00:22:09.999909 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.89s 2026-01-05 00:22:09.999920 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 1.55s 2026-01-05 00:22:09.999930 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.21s 2026-01-05 00:22:09.999941 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.19s 2026-01-05 00:22:09.999952 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.17s 2026-01-05 00:22:09.999963 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s 2026-01-05 00:22:09.999992 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s 2026-01-05 00:22:10.000004 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-01-05 00:22:10.000016 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-01-05 00:22:10.000028 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-01-05 00:22:10.000045 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-01-05 00:22:10.000057 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-01-05 00:22:10.000068 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-01-05 00:22:10.320486 | orchestrator | + osism apply sshconfig 2026-01-05 00:22:22.360961 | orchestrator | 2026-01-05 00:22:22 | INFO  | Task dc16ce5e-0f01-422d-8530-248942c11502 (sshconfig) was prepared for execution. 
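The version check executed earlier in this log prints an `Expected` / `Running` / `Status` triple per service. A rough sketch of that comparison, assuming the running image is read from `docker inspect`'s `.Config.Image` field (the check script itself is deployed by the osism.services.manager role and is not visible in this output, so the helper name and exact behaviour here are hypothetical):

```shell
# Hypothetical helper mirroring the "Expected / Running / Status" lines
# printed by the version check; the real deployed script may differ.
check_container_version() {
    local container="$1"
    local expected="$2"
    local running

    # The image the container was started from, e.g.
    # registry.osism.tech/osism/ara-server:1.7.3
    running="$(docker inspect -f '{{.Config.Image}}' "$container")"

    if [ "$running" = "$expected" ]; then
        echo "MATCH"
    else
        echo "MISMATCH: running=${running} expected=${expected}"
        return 1
    fi
}
```

With all fourteen services matching, the summary in the log reports zero errors and zero warnings.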
2026-01-05 00:22:22.361157 | orchestrator | 2026-01-05 00:22:22 | INFO  | It takes a moment until task dc16ce5e-0f01-422d-8530-248942c11502 (sshconfig) has been started and output is visible here. 2026-01-05 00:22:33.965428 | orchestrator | 2026-01-05 00:22:33.965603 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-05 00:22:33.965621 | orchestrator | 2026-01-05 00:22:33.965660 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-05 00:22:33.965672 | orchestrator | Monday 05 January 2026 00:22:26 +0000 (0:00:00.151) 0:00:00.151 ******** 2026-01-05 00:22:33.965684 | orchestrator | ok: [testbed-manager] 2026-01-05 00:22:33.965696 | orchestrator | 2026-01-05 00:22:33.965707 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-05 00:22:33.965718 | orchestrator | Monday 05 January 2026 00:22:27 +0000 (0:00:00.520) 0:00:00.672 ******** 2026-01-05 00:22:33.965729 | orchestrator | changed: [testbed-manager] 2026-01-05 00:22:33.965741 | orchestrator | 2026-01-05 00:22:33.965752 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-05 00:22:33.965764 | orchestrator | Monday 05 January 2026 00:22:27 +0000 (0:00:00.499) 0:00:01.171 ******** 2026-01-05 00:22:33.965775 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:22:33.965786 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:22:33.965798 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:22:33.965808 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:22:33.965819 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-01-05 00:22:33.965830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:22:33.965841 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-01-05 00:22:33.965851 | orchestrator | 2026-01-05 00:22:33.965862 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-05 00:22:33.965873 | orchestrator | Monday 05 January 2026 00:22:33 +0000 (0:00:05.628) 0:00:06.799 ******** 2026-01-05 00:22:33.965884 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:22:33.965895 | orchestrator | 2026-01-05 00:22:33.965906 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-05 00:22:33.965917 | orchestrator | Monday 05 January 2026 00:22:33 +0000 (0:00:00.072) 0:00:06.872 ******** 2026-01-05 00:22:33.965928 | orchestrator | changed: [testbed-manager] 2026-01-05 00:22:33.965939 | orchestrator | 2026-01-05 00:22:33.965950 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:22:33.965965 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:22:33.965979 | orchestrator | 2026-01-05 00:22:33.965991 | orchestrator | 2026-01-05 00:22:33.966004 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:22:33.966081 | orchestrator | Monday 05 January 2026 00:22:33 +0000 (0:00:00.521) 0:00:07.394 ******** 2026-01-05 00:22:33.966096 | orchestrator | =============================================================================== 2026-01-05 00:22:33.966108 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.63s 2026-01-05 00:22:33.966122 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.52s 2026-01-05 00:22:33.966134 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.52s 2026-01-05 00:22:33.966186 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.50s 2026-01-05 00:22:33.966201 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-01-05 00:22:34.175770 | orchestrator | + osism apply known-hosts 2026-01-05 00:22:46.068699 | orchestrator | 2026-01-05 00:22:46 | INFO  | Task 3b1d88a2-0248-466e-90cc-72fa07cce31c (known-hosts) was prepared for execution. 2026-01-05 00:22:46.068788 | orchestrator | 2026-01-05 00:22:46 | INFO  | It takes a moment until task 3b1d88a2-0248-466e-90cc-72fa07cce31c (known-hosts) has been started and output is visible here. 2026-01-05 00:23:03.476026 | orchestrator | 2026-01-05 00:23:03.476150 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-05 00:23:03.476167 | orchestrator | 2026-01-05 00:23:03.476179 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-05 00:23:03.476191 | orchestrator | Monday 05 January 2026 00:22:50 +0000 (0:00:00.171) 0:00:00.171 ******** 2026-01-05 00:23:03.476204 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:23:03.476215 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-05 00:23:03.476227 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:23:03.476238 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-05 00:23:03.476249 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:23:03.476260 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:23:03.476271 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:23:03.476282 | orchestrator | 2026-01-05 00:23:03.476293 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-01-05 00:23:03.476305 | orchestrator | Monday 05 January 2026 00:22:56 +0000 (0:00:06.159) 0:00:06.331 ******** 2026-01-05 
00:23:03.476318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-05 00:23:03.476331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-05 00:23:03.476342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-05 00:23:03.476353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-05 00:23:03.476364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-05 00:23:03.476386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-05 00:23:03.476397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-05 00:23:03.476408 | orchestrator | 2026-01-05 00:23:03.476420 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:03.476431 | orchestrator | Monday 05 January 2026 00:22:56 +0000 (0:00:00.168) 0:00:06.500 ******** 2026-01-05 00:23:03.476443 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIIpYHHzVfRPSPP1xebsUJWcHexDqujjg7N4VIh7ir/iz) 2026-01-05 00:23:03.476464 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEfheJnUEETIsZxoPsSMlKBPyiCtgDzJkOYC2m47oITc64f2cVj7JS8bFhgQu1v633cGVQyrdYOmfl8vF3jUVuMrcWYeMlegMk+pCADa5EamnxyHkmXDSefA5Jy+YlG5gy4xFCiAa302rnhAoO4nENVeOgcZFg9JFUJ2rf3PBafZIy1jE6IMbRGPFnkrT7VIzaYt3S48vhVdjLbjHCYxyAwfz7W/KEY2n9bIoHGOGHEVxPyLIuKVHIQr5Oq6IZwH6bwiWqjAslaFgNl9JKlDp0FZ6ZqBKuDql9B+V2/eVzfybAP0YYp+iY/9BKJgOLmtP8GxwLPw8ql2iRmfSIIKtxj0oRPl7ix1xvNQJwpZcGqQh1YxLdD2vxjjCQzgeKjrEfXX0fFlB6iyAXSyCVc3za58XCHUCKqmDIJs8pvKWBMfnm2h0oHugoKYoN69Fb75RTi6hcG+C18jeThCa76sd2g50VasQF1MnJAyVn2yCU2JiAePLVQdDcxI4UhcYPLn0=) 2026-01-05 00:23:03.476537 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDzfanaTmWgHvSFbBqPngitXlgRU/0zhKpzBGEG0nWOsjtuZdlkQ6bLnSfNZpI0IkYN4pJfmpsxkz3t+b0jtbjs=) 2026-01-05 00:23:03.476554 | orchestrator | 2026-01-05 00:23:03.476567 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:03.476580 | orchestrator | Monday 05 January 2026 00:22:57 +0000 (0:00:01.243) 0:00:07.743 ******** 2026-01-05 00:23:03.476592 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKluS7k1Eet+2gwUqSLepZyoJxLVfV0fHEMxq6jh4TJRd/z0VGcXqVWzeKa6WotK/DCxIVMTyymzMT7ZzkrMtps=) 2026-01-05 00:23:03.476635 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDFj0z2IqHtl6C8XXlxxfn1Z6YWL7g9rVLHX+5zTQ7SSRF4nRLcT8D955v++M/8tIptdsdUvsMOBm92t3lTCnaaMsVoKg8pXc+WSq0EZ4l2IesnpQXA1v6Q3XfuvkA2aQSH7oUDF7pcqJMm7SHWNoMEq7i79Q5xuCJOYf1Pw9xe+KBH3qgA1J+yX0dC2Qz/T3e3BygD5G1qfpC4iv7KEkzzWT8EyRH4/TuvqI0ZA8jdbdHqy0b8B49qReKyobvOrbLiKnA5u439LG1lMkVj9tX1ywJeOt5zuXFNOdwssntcRK2y1j6ff4ueTWvi0XTTcd6fFgVP3RCZW/YlUSvQBhrweqIIiWyuZgr9ylIwHQlb6+J+MsufwLRMrHQo5xC88/ZB/amjZVRKZ7kBwhoqx2qlWamgHIYcI7bZRA2Bc/t9jZFriIyOWoadlxijVkuQ613Edvlt9sCChFO/nt6oxiwHGGwzcmac7sKk7Ejpp3R20IlSnPmJ/NNF7fC8KhT1oN8=) 2026-01-05 00:23:03.476651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMBM7dUUznTCjVXf78NnckEx0GRhz4liGFPpuTIOLsuC) 2026-01-05 00:23:03.476664 | orchestrator | 2026-01-05 00:23:03.476678 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:03.476691 | orchestrator | Monday 05 January 2026 00:22:59 +0000 (0:00:01.138) 0:00:08.882 ******** 2026-01-05 00:23:03.476704 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB2YB6BVK0nnctmNsicjlEjnGSuqI1gjpq6EmDUNlYXq) 2026-01-05 00:23:03.476718 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChFnCF5HT4HLnYKvF2lZAxzwveKlYdadpxR8hWSxhXym+916kWzJchWz1/eQpC3c5Iafxq0AHQJXoquvrdyP0Z4jAH6gO3RtBJyJm01YreUYJmd0i7b346GwAPR+fbbrFWYLcYRdNg/CHzqQYXmqaR4vbb8czmNP4stXk1jxfHF0xji8HdOLhAzG5h44Vj2IzSxlRnLPmdP/M5+QHgIHrJ0xjUUH4ZjGy9o/nDf1/KwZ4Y0myxR+LnDv2sHatWtvtAkFqFECf/9TOvWcel6ACNYWpZezlGgIyDRntRHx+3pdDXQqcoevErSo0rmfjSwjZ2UPiKkk7c2wfDzG1Rz1eYW5ugRR+P88XsbUZ2Rbb+AzTLLpdYv+piE/FZLMOk5cYZo3Ln6A1qWvq9BwZLGVLudZIdLFuWxEB2ZTXhugxUCt7ZKNIt0rwJyRU7iBgwOQ7JOKjrlPshOusjEXafCxkdLlZ7LKNalljp7HOfTLe4OA6LPWGn9NbiemXC4G1Rpl8=) 2026-01-05 00:23:03.476732 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKD4QG3zLWA079HyCtQOHeh6s325HbXCoeS102GBvicLkmDm2CUM60xUEENcuvdqPPNWyhFgLG1cr6MhV77XO3Q=) 2026-01-05 00:23:03.476744 | orchestrator | 2026-01-05 00:23:03.476756 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:03.476769 | orchestrator | Monday 05 January 2026 00:23:00 +0000 (0:00:01.088) 0:00:09.970 ******** 2026-01-05 00:23:03.476782 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAaqEJ3oxyvXtNFsTOOn6iWGs3u4d5qMM/OWVJnr69n+) 2026-01-05 00:23:03.476796 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtYJu6v/hSJch6iyGMmXrNwDo6u1LjQKP7K+uaD32IrKgNpHnAWlTW1du3UoUbUd9on/wI5eblyxitbw3oxDVOWStQKPOKXgkUx9rgPsACmcz4YBWsVZNEX1VhEFfTj+MZ6bB5D8QMYqFT71uAzMqUewNhBDh/ipVJyh6QMor1HGCMUYFzM/GxT4SmRexkqPr4oA2QhPRjld+YAok8rgVJbzv8fswG64McTlK1G4kiXXraX7BkLqO1Ij1giweZV1JTlylAMVvv8WjugtBlBWt9c1RWVaHxvVOCRbxiif7AnpEjATjfkmIC8oiXL6hWcvv0YJ+L8ZQ2+5zwIdCxwA/8fbR79tL1q6ko307cm7s0mazdS7OQo2ZwcX+wDseGrnQqoqBRvWgajYqh1eb1x0eDepVrxB2OGgkmG/PcY6EJwuAyHim/etBXf0AjbKxlq96Gcg+8G4ktxtkKRH7Uk6FeCb1xEKmGNV/HxMl7wmUWsAZJVNFjtC9fc7yJIkpd0Ys=) 2026-01-05 00:23:03.476818 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIrzglcGeNCCbPLdi/3NZjwrD6J3AHud3v58QtVXQRKOeI1v+JHZrH2/eUvJwtneVNN7X8kbZ623lS0+P0zuJZU=) 2026-01-05 00:23:03.476831 | orchestrator | 2026-01-05 00:23:03.476844 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:03.476857 | orchestrator | Monday 05 January 2026 00:23:01 +0000 (0:00:01.137) 0:00:11.107 ******** 2026-01-05 00:23:03.476949 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/R7CcdmU2o97v3AvO3EFrcrC23PY7NJlud/PLh7wkPjSVRkfGVik6iowi1AeX2dBMSV9RYbkKwHSDWKU3h0/juCOhdbjzPxJhZU3LKERflO9mAyCD6HSL3WtxOsJ6ae+8j9mNpsJfZp62syEvMdIbmh5PyqulcbTVOsSLsFV31zxdd5ukwypM3Fdyif7ETRZiEb38h101SZJTHYXbrzAmYXZu4ZO9awQjaRm2H+b3MPewllnNulo0aWYlRvdyCpBo+SVET8Adbz71qYVl7qReFVzwxe+3KzFHrsCp8w0G52/Xo4vE6iEpApxHh9M5PGV6xQUX6z7CfICRY3phHOn+/wUgjSWzJphytfCFSspzcM9M3CG/IXTjkEWhGoxv4loMrDyWeHQuW2aIDR/Uvr38j6tiQbNA+v7QM7oMYV7g4bh6aRCvq0a0oqbTmFkRfkNDg2Q6gWixjQi3r2cL302j/bhzIJdQf5G06HeaqdtrLlrwL5nsAhy2yXHfPWTalQs=) 2026-01-05 00:23:03.476962 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGht8afGu/SQbugUSh1yNpJSj+K5EmIbYtM4bArX49Bt4kq14iM5NenxesU/agxEThjKavgGf2C+27hZaU+wHuQ=) 2026-01-05 00:23:03.476973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBK8QR0poiIn7Dli2hgooTnHbdIkgGHTnNUIJzbTUGM) 2026-01-05 00:23:03.476984 | orchestrator | 2026-01-05 00:23:03.476995 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:03.477006 | orchestrator | Monday 05 January 2026 00:23:02 +0000 (0:00:01.086) 0:00:12.194 ******** 2026-01-05 00:23:03.477023 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIpEKpQI2DaEdcJxaBF9BLYb/UCMO/weB1put3hvzZo6R5z4botv3kqocMi62ycI5OGtzhzdmzpbReVYRUhkZ1w=) 2026-01-05 00:23:14.659285 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDOuaG0I6sCJF2DvXLdFE69pva0+vBIXHVU95nhE/MRLDwd4Tj/JPMJSOB7xXeYvrfIIwtfd/Y1eC7+NxaWdycjxRBqpMCzVwyVyxpTfAOWSvElTWYYnKWCg/Thw+D1I6931aGQENedQiaJJsvIFBOF4RkhP9Nh6neKMrIKFPb7xXmeCm4CjfAc05oflxjhUYJsIc2lnefQfSc3Hi5B4QmtNk46NxewdS8j+HNndFQUnxuwWLJRBUoptjeGLBxOC5ePeJMFY+Zc/0Jb5QHgu0Eo49bTEH6fVSqzNOyAct5YNWR/MKFj2kBeX4+fINPxlCNu/isXHOJR8sY5jNZN7fA3De0qYjP/+iKN4XN5SrB5Ofcv6trHF4N4fATbRYRgSWADyE/KeMg4omn0fzyXPFwoxqy+ST0rGqXgN2itmD0Lo7PgPer6lr7FmwX1RUYGhdxpqalZr5by+EH521RLu0xivpn2NErZZJAlLw9DCDQ7U0tfvosg11iLMbJs65dGdl0=) 2026-01-05 00:23:14.659411 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKI08kWekFGBSgg5cyBcMqMH6Jx0/A+zYTssLKr6E9aB) 2026-01-05 00:23:14.659429 | orchestrator | 2026-01-05 00:23:14.659442 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:14.659455 | orchestrator | Monday 05 January 2026 00:23:03 +0000 (0:00:01.144) 0:00:13.338 ******** 2026-01-05 00:23:14.659468 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPS5Ihfjp/SZObj5yPhJ6Eofd6DRYKlZHK9ZlVQ1ys6rpec5otDSqKjCG0Dq5OXjnd29YlqyLwHB3eqZ/VFQKgg=) 2026-01-05 00:23:14.659481 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1u+yOiweloIyvBV9iGVmRiDfwQnfIhj5VvqHiwBxzl/YyIcbixzegrsPUpWBkk7hfDG4bdBer696OUjYcwBetqNSXhaOlPyDtLjsGMMCBB5DHyTqV5Q6oQwE1tFu19GsQu/e1R501OtZuYgXSQgPMCf608mJCmZqHftxST1SIfwgw53toKvbMrKL0tmBkAn4tvWZyT6/FYssF7u0iRoJGMhL7PGXrgsmoe10vofwxrr/jCeetAKAulbyuMpBemHPlIkZ0VQ+YO9BniFZd8QXEgOXnOJsGM4C1nK9ZL2cgP0hsWIcmgTXpSf8Ajf7g4eJbPVf/ZtpPxfpY0CvgU5LZ9sChlbjnxRZMWgrWft47OHM69g2WHerKgFr8D0kQF2ao+27T0LGkZoT7akoCIOobFpSt1L0BmezDcYMgK0SCvzBUUYqD5tyhGg0MxOcoILi51kSM0E9en99pA6NUiUaPrTT4bv9IkilZRHBog1avA9TMIV4XjS3oY7RgpsEvmVU=) 2026-01-05 00:23:14.659565 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHHJFO0jkwXQREbkYBZ+kQ2ovEUHLr/qhsPl8e2Yluw6) 2026-01-05 00:23:14.659579 | orchestrator | 2026-01-05 00:23:14.659590 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-05 00:23:14.659602 | orchestrator | Monday 05 January 2026 00:23:04 +0000 (0:00:01.053) 0:00:14.392 ******** 2026-01-05 00:23:14.659614 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-05 00:23:14.659625 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-05 00:23:14.659636 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-05 00:23:14.659647 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-05 00:23:14.659658 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-05 00:23:14.659669 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-05 00:23:14.659679 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-05 00:23:14.659690 | orchestrator | 2026-01-05 00:23:14.659701 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-05 00:23:14.659713 | orchestrator | Monday 05 January 2026 00:23:10 +0000 (0:00:05.525) 0:00:19.917 ******** 2026-01-05 00:23:14.659725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-05 00:23:14.659738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-05 00:23:14.659749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-4) 2026-01-05 00:23:14.659760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-05 00:23:14.659771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-05 00:23:14.659782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-05 00:23:14.659793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-05 00:23:14.659804 | orchestrator | 2026-01-05 00:23:14.659835 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:14.659848 | orchestrator | Monday 05 January 2026 00:23:10 +0000 (0:00:00.172) 0:00:20.089 ******** 2026-01-05 00:23:14.659861 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDzfanaTmWgHvSFbBqPngitXlgRU/0zhKpzBGEG0nWOsjtuZdlkQ6bLnSfNZpI0IkYN4pJfmpsxkz3t+b0jtbjs=) 2026-01-05 00:23:14.659877 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDEfheJnUEETIsZxoPsSMlKBPyiCtgDzJkOYC2m47oITc64f2cVj7JS8bFhgQu1v633cGVQyrdYOmfl8vF3jUVuMrcWYeMlegMk+pCADa5EamnxyHkmXDSefA5Jy+YlG5gy4xFCiAa302rnhAoO4nENVeOgcZFg9JFUJ2rf3PBafZIy1jE6IMbRGPFnkrT7VIzaYt3S48vhVdjLbjHCYxyAwfz7W/KEY2n9bIoHGOGHEVxPyLIuKVHIQr5Oq6IZwH6bwiWqjAslaFgNl9JKlDp0FZ6ZqBKuDql9B+V2/eVzfybAP0YYp+iY/9BKJgOLmtP8GxwLPw8ql2iRmfSIIKtxj0oRPl7ix1xvNQJwpZcGqQh1YxLdD2vxjjCQzgeKjrEfXX0fFlB6iyAXSyCVc3za58XCHUCKqmDIJs8pvKWBMfnm2h0oHugoKYoN69Fb75RTi6hcG+C18jeThCa76sd2g50VasQF1MnJAyVn2yCU2JiAePLVQdDcxI4UhcYPLn0=) 2026-01-05 00:23:14.659918 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIpYHHzVfRPSPP1xebsUJWcHexDqujjg7N4VIh7ir/iz) 2026-01-05 00:23:14.659932 | orchestrator | 2026-01-05 00:23:14.659944 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:14.659962 | orchestrator | Monday 05 January 2026 00:23:11 +0000 (0:00:01.094) 0:00:21.184 ******** 2026-01-05 00:23:14.659975 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKluS7k1Eet+2gwUqSLepZyoJxLVfV0fHEMxq6jh4TJRd/z0VGcXqVWzeKa6WotK/DCxIVMTyymzMT7ZzkrMtps=) 2026-01-05 00:23:14.659988 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFj0z2IqHtl6C8XXlxxfn1Z6YWL7g9rVLHX+5zTQ7SSRF4nRLcT8D955v++M/8tIptdsdUvsMOBm92t3lTCnaaMsVoKg8pXc+WSq0EZ4l2IesnpQXA1v6Q3XfuvkA2aQSH7oUDF7pcqJMm7SHWNoMEq7i79Q5xuCJOYf1Pw9xe+KBH3qgA1J+yX0dC2Qz/T3e3BygD5G1qfpC4iv7KEkzzWT8EyRH4/TuvqI0ZA8jdbdHqy0b8B49qReKyobvOrbLiKnA5u439LG1lMkVj9tX1ywJeOt5zuXFNOdwssntcRK2y1j6ff4ueTWvi0XTTcd6fFgVP3RCZW/YlUSvQBhrweqIIiWyuZgr9ylIwHQlb6+J+MsufwLRMrHQo5xC88/ZB/amjZVRKZ7kBwhoqx2qlWamgHIYcI7bZRA2Bc/t9jZFriIyOWoadlxijVkuQ613Edvlt9sCChFO/nt6oxiwHGGwzcmac7sKk7Ejpp3R20IlSnPmJ/NNF7fC8KhT1oN8=) 2026-01-05 00:23:14.660002 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMBM7dUUznTCjVXf78NnckEx0GRhz4liGFPpuTIOLsuC) 2026-01-05 00:23:14.660014 | orchestrator | 2026-01-05 00:23:14.660027 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:14.660039 | orchestrator | Monday 05 January 2026 00:23:12 +0000 (0:00:01.099) 0:00:22.283 ******** 2026-01-05 00:23:14.660052 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKD4QG3zLWA079HyCtQOHeh6s325HbXCoeS102GBvicLkmDm2CUM60xUEENcuvdqPPNWyhFgLG1cr6MhV77XO3Q=) 2026-01-05 00:23:14.660065 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB2YB6BVK0nnctmNsicjlEjnGSuqI1gjpq6EmDUNlYXq) 2026-01-05 00:23:14.660076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChFnCF5HT4HLnYKvF2lZAxzwveKlYdadpxR8hWSxhXym+916kWzJchWz1/eQpC3c5Iafxq0AHQJXoquvrdyP0Z4jAH6gO3RtBJyJm01YreUYJmd0i7b346GwAPR+fbbrFWYLcYRdNg/CHzqQYXmqaR4vbb8czmNP4stXk1jxfHF0xji8HdOLhAzG5h44Vj2IzSxlRnLPmdP/M5+QHgIHrJ0xjUUH4ZjGy9o/nDf1/KwZ4Y0myxR+LnDv2sHatWtvtAkFqFECf/9TOvWcel6ACNYWpZezlGgIyDRntRHx+3pdDXQqcoevErSo0rmfjSwjZ2UPiKkk7c2wfDzG1Rz1eYW5ugRR+P88XsbUZ2Rbb+AzTLLpdYv+piE/FZLMOk5cYZo3Ln6A1qWvq9BwZLGVLudZIdLFuWxEB2ZTXhugxUCt7ZKNIt0rwJyRU7iBgwOQ7JOKjrlPshOusjEXafCxkdLlZ7LKNalljp7HOfTLe4OA6LPWGn9NbiemXC4G1Rpl8=) 2026-01-05 00:23:14.660087 | orchestrator | 2026-01-05 00:23:14.660098 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:14.660109 | orchestrator | Monday 05 January 2026 00:23:13 +0000 (0:00:01.118) 0:00:23.402 ******** 2026-01-05 00:23:14.660129 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtYJu6v/hSJch6iyGMmXrNwDo6u1LjQKP7K+uaD32IrKgNpHnAWlTW1du3UoUbUd9on/wI5eblyxitbw3oxDVOWStQKPOKXgkUx9rgPsACmcz4YBWsVZNEX1VhEFfTj+MZ6bB5D8QMYqFT71uAzMqUewNhBDh/ipVJyh6QMor1HGCMUYFzM/GxT4SmRexkqPr4oA2QhPRjld+YAok8rgVJbzv8fswG64McTlK1G4kiXXraX7BkLqO1Ij1giweZV1JTlylAMVvv8WjugtBlBWt9c1RWVaHxvVOCRbxiif7AnpEjATjfkmIC8oiXL6hWcvv0YJ+L8ZQ2+5zwIdCxwA/8fbR79tL1q6ko307cm7s0mazdS7OQo2ZwcX+wDseGrnQqoqBRvWgajYqh1eb1x0eDepVrxB2OGgkmG/PcY6EJwuAyHim/etBXf0AjbKxlq96Gcg+8G4ktxtkKRH7Uk6FeCb1xEKmGNV/HxMl7wmUWsAZJVNFjtC9fc7yJIkpd0Ys=) 2026-01-05 00:23:19.208825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIrzglcGeNCCbPLdi/3NZjwrD6J3AHud3v58QtVXQRKOeI1v+JHZrH2/eUvJwtneVNN7X8kbZ623lS0+P0zuJZU=) 2026-01-05 00:23:19.208951 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAaqEJ3oxyvXtNFsTOOn6iWGs3u4d5qMM/OWVJnr69n+) 2026-01-05 00:23:19.208960 | orchestrator | 2026-01-05 00:23:19.208968 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:19.208975 | orchestrator | Monday 05 January 2026 00:23:14 +0000 (0:00:01.118) 0:00:24.520 ******** 2026-01-05 00:23:19.208980 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGht8afGu/SQbugUSh1yNpJSj+K5EmIbYtM4bArX49Bt4kq14iM5NenxesU/agxEThjKavgGf2C+27hZaU+wHuQ=) 2026-01-05 00:23:19.208987 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/R7CcdmU2o97v3AvO3EFrcrC23PY7NJlud/PLh7wkPjSVRkfGVik6iowi1AeX2dBMSV9RYbkKwHSDWKU3h0/juCOhdbjzPxJhZU3LKERflO9mAyCD6HSL3WtxOsJ6ae+8j9mNpsJfZp62syEvMdIbmh5PyqulcbTVOsSLsFV31zxdd5ukwypM3Fdyif7ETRZiEb38h101SZJTHYXbrzAmYXZu4ZO9awQjaRm2H+b3MPewllnNulo0aWYlRvdyCpBo+SVET8Adbz71qYVl7qReFVzwxe+3KzFHrsCp8w0G52/Xo4vE6iEpApxHh9M5PGV6xQUX6z7CfICRY3phHOn+/wUgjSWzJphytfCFSspzcM9M3CG/IXTjkEWhGoxv4loMrDyWeHQuW2aIDR/Uvr38j6tiQbNA+v7QM7oMYV7g4bh6aRCvq0a0oqbTmFkRfkNDg2Q6gWixjQi3r2cL302j/bhzIJdQf5G06HeaqdtrLlrwL5nsAhy2yXHfPWTalQs=) 2026-01-05 00:23:19.208995 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBK8QR0poiIn7Dli2hgooTnHbdIkgGHTnNUIJzbTUGM) 2026-01-05 00:23:19.209000 | orchestrator | 2026-01-05 00:23:19.209006 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:19.209011 | orchestrator | Monday 05 January 2026 00:23:15 +0000 (0:00:01.119) 0:00:25.639 ******** 2026-01-05 00:23:19.209017 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOuaG0I6sCJF2DvXLdFE69pva0+vBIXHVU95nhE/MRLDwd4Tj/JPMJSOB7xXeYvrfIIwtfd/Y1eC7+NxaWdycjxRBqpMCzVwyVyxpTfAOWSvElTWYYnKWCg/Thw+D1I6931aGQENedQiaJJsvIFBOF4RkhP9Nh6neKMrIKFPb7xXmeCm4CjfAc05oflxjhUYJsIc2lnefQfSc3Hi5B4QmtNk46NxewdS8j+HNndFQUnxuwWLJRBUoptjeGLBxOC5ePeJMFY+Zc/0Jb5QHgu0Eo49bTEH6fVSqzNOyAct5YNWR/MKFj2kBeX4+fINPxlCNu/isXHOJR8sY5jNZN7fA3De0qYjP/+iKN4XN5SrB5Ofcv6trHF4N4fATbRYRgSWADyE/KeMg4omn0fzyXPFwoxqy+ST0rGqXgN2itmD0Lo7PgPer6lr7FmwX1RUYGhdxpqalZr5by+EH521RLu0xivpn2NErZZJAlLw9DCDQ7U0tfvosg11iLMbJs65dGdl0=) 2026-01-05 00:23:19.209023 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIpEKpQI2DaEdcJxaBF9BLYb/UCMO/weB1put3hvzZo6R5z4botv3kqocMi62ycI5OGtzhzdmzpbReVYRUhkZ1w=) 2026-01-05 00:23:19.209028 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKI08kWekFGBSgg5cyBcMqMH6Jx0/A+zYTssLKr6E9aB) 2026-01-05 00:23:19.209034 | orchestrator | 2026-01-05 00:23:19.209039 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-05 00:23:19.209044 | orchestrator | Monday 05 January 2026 00:23:16 +0000 (0:00:01.137) 0:00:26.776 ******** 2026-01-05 00:23:19.209049 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1u+yOiweloIyvBV9iGVmRiDfwQnfIhj5VvqHiwBxzl/YyIcbixzegrsPUpWBkk7hfDG4bdBer696OUjYcwBetqNSXhaOlPyDtLjsGMMCBB5DHyTqV5Q6oQwE1tFu19GsQu/e1R501OtZuYgXSQgPMCf608mJCmZqHftxST1SIfwgw53toKvbMrKL0tmBkAn4tvWZyT6/FYssF7u0iRoJGMhL7PGXrgsmoe10vofwxrr/jCeetAKAulbyuMpBemHPlIkZ0VQ+YO9BniFZd8QXEgOXnOJsGM4C1nK9ZL2cgP0hsWIcmgTXpSf8Ajf7g4eJbPVf/ZtpPxfpY0CvgU5LZ9sChlbjnxRZMWgrWft47OHM69g2WHerKgFr8D0kQF2ao+27T0LGkZoT7akoCIOobFpSt1L0BmezDcYMgK0SCvzBUUYqD5tyhGg0MxOcoILi51kSM0E9en99pA6NUiUaPrTT4bv9IkilZRHBog1avA9TMIV4XjS3oY7RgpsEvmVU=) 2026-01-05 00:23:19.209070 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHHJFO0jkwXQREbkYBZ+kQ2ovEUHLr/qhsPl8e2Yluw6) 2026-01-05 00:23:19.209075 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPS5Ihfjp/SZObj5yPhJ6Eofd6DRYKlZHK9ZlVQ1ys6rpec5otDSqKjCG0Dq5OXjnd29YlqyLwHB3eqZ/VFQKgg=) 2026-01-05 00:23:19.209086 | orchestrator | 2026-01-05 00:23:19.209091 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-05 00:23:19.209096 | orchestrator | Monday 05 January 2026 00:23:18 +0000 (0:00:01.113) 0:00:27.890 ******** 2026-01-05 00:23:19.209103 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-05 00:23:19.209109 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-05 00:23:19.209125 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-05 00:23:19.209130 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-05 00:23:19.209135 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-05 00:23:19.209140 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-05 00:23:19.209145 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-05 00:23:19.209151 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:23:19.209156 | orchestrator | 2026-01-05 00:23:19.209161 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-05 00:23:19.209167 | orchestrator | Monday 05 January 2026 00:23:18 +0000 (0:00:00.152) 0:00:28.043 ******** 2026-01-05 00:23:19.209172 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:23:19.209177 | orchestrator | 2026-01-05 00:23:19.209182 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-05 00:23:19.209187 | orchestrator | Monday 05 January 2026 00:23:18 +0000 (0:00:00.053) 0:00:28.096 ******** 2026-01-05 00:23:19.209192 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:23:19.209198 | orchestrator | 2026-01-05 00:23:19.209203 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-05 00:23:19.209208 | orchestrator | Monday 05 January 2026 00:23:18 +0000 (0:00:00.044) 0:00:28.141 ******** 2026-01-05 00:23:19.209213 | orchestrator | changed: [testbed-manager] 2026-01-05 00:23:19.209218 | orchestrator | 2026-01-05 00:23:19.209223 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:23:19.209232 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 00:23:19.209239 | orchestrator | 2026-01-05 00:23:19.209244 | orchestrator | 2026-01-05 
00:23:19.209249 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:23:19.209255 | orchestrator | Monday 05 January 2026 00:23:18 +0000 (0:00:00.714) 0:00:28.856 ******** 2026-01-05 00:23:19.209260 | orchestrator | =============================================================================== 2026-01-05 00:23:19.209265 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.16s 2026-01-05 00:23:19.209270 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.53s 2026-01-05 00:23:19.209277 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-01-05 00:23:19.209282 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-05 00:23:19.209287 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-05 00:23:19.209292 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-05 00:23:19.209297 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-05 00:23:19.209302 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-05 00:23:19.209307 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-05 00:23:19.209312 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-05 00:23:19.209317 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-05 00:23:19.209322 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-05 00:23:19.209333 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-05 
00:23:19.209338 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-05 00:23:19.209343 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-05 00:23:19.209349 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-05 00:23:19.209354 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.71s 2026-01-05 00:23:19.209359 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-01-05 00:23:19.209364 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-01-05 00:23:19.209371 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-01-05 00:23:19.526325 | orchestrator | + osism apply squid 2026-01-05 00:23:31.586441 | orchestrator | 2026-01-05 00:23:31 | INFO  | Task 54e4face-a0de-4994-955f-163f4282e203 (squid) was prepared for execution. 2026-01-05 00:23:31.586617 | orchestrator | 2026-01-05 00:23:31 | INFO  | It takes a moment until task 54e4face-a0de-4994-955f-163f4282e203 (squid) has been started and output is visible here. 
2026-01-05 00:25:27.016186 | orchestrator | 2026-01-05 00:25:27.016316 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-05 00:25:27.016333 | orchestrator | 2026-01-05 00:25:27.016346 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-05 00:25:27.016358 | orchestrator | Monday 05 January 2026 00:23:35 +0000 (0:00:00.178) 0:00:00.178 ******** 2026-01-05 00:25:27.016370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:25:27.016382 | orchestrator | 2026-01-05 00:25:27.016412 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-05 00:25:27.016436 | orchestrator | Monday 05 January 2026 00:23:35 +0000 (0:00:00.092) 0:00:00.270 ******** 2026-01-05 00:25:27.016448 | orchestrator | ok: [testbed-manager] 2026-01-05 00:25:27.016460 | orchestrator | 2026-01-05 00:25:27.016472 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-05 00:25:27.016483 | orchestrator | Monday 05 January 2026 00:23:37 +0000 (0:00:01.622) 0:00:01.893 ******** 2026-01-05 00:25:27.016495 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-05 00:25:27.016506 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-05 00:25:27.016517 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-05 00:25:27.016529 | orchestrator | 2026-01-05 00:25:27.016540 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-05 00:25:27.016551 | orchestrator | Monday 05 January 2026 00:23:38 +0000 (0:00:01.179) 0:00:03.072 ******** 2026-01-05 00:25:27.016562 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-05 00:25:27.016574 | 
orchestrator | 2026-01-05 00:25:27.016727 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-05 00:25:27.016753 | orchestrator | Monday 05 January 2026 00:23:40 +0000 (0:00:01.266) 0:00:04.338 ******** 2026-01-05 00:25:27.016783 | orchestrator | ok: [testbed-manager] 2026-01-05 00:25:27.016804 | orchestrator | 2026-01-05 00:25:27.016822 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-05 00:25:27.016843 | orchestrator | Monday 05 January 2026 00:23:40 +0000 (0:00:00.369) 0:00:04.708 ******** 2026-01-05 00:25:27.016862 | orchestrator | changed: [testbed-manager] 2026-01-05 00:25:27.016880 | orchestrator | 2026-01-05 00:25:27.016907 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-05 00:25:27.016926 | orchestrator | Monday 05 January 2026 00:23:41 +0000 (0:00:00.968) 0:00:05.676 ******** 2026-01-05 00:25:27.016945 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-05 00:25:27.017008 | orchestrator | ok: [testbed-manager] 2026-01-05 00:25:27.017031 | orchestrator | 2026-01-05 00:25:27.017044 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-05 00:25:27.017060 | orchestrator | Monday 05 January 2026 00:24:13 +0000 (0:00:32.537) 0:00:38.213 ******** 2026-01-05 00:25:27.017079 | orchestrator | changed: [testbed-manager] 2026-01-05 00:25:27.017097 | orchestrator | 2026-01-05 00:25:27.017115 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-05 00:25:27.017134 | orchestrator | Monday 05 January 2026 00:24:25 +0000 (0:00:12.002) 0:00:50.215 ******** 2026-01-05 00:25:27.017153 | orchestrator | Pausing for 60 seconds 2026-01-05 00:25:27.017173 | orchestrator | changed: [testbed-manager] 2026-01-05 00:25:27.017188 | orchestrator | 2026-01-05 00:25:27.017340 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-05 00:25:27.017362 | orchestrator | Monday 05 January 2026 00:25:26 +0000 (0:01:00.090) 0:01:50.306 ******** 2026-01-05 00:25:27.017383 | orchestrator | ok: [testbed-manager] 2026-01-05 00:25:27.017403 | orchestrator | 2026-01-05 00:25:27.017421 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-05 00:25:27.017440 | orchestrator | Monday 05 January 2026 00:25:26 +0000 (0:00:00.079) 0:01:50.385 ******** 2026-01-05 00:25:27.017458 | orchestrator | changed: [testbed-manager] 2026-01-05 00:25:27.017477 | orchestrator | 2026-01-05 00:25:27.017495 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:25:27.017513 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:25:27.017525 | orchestrator | 2026-01-05 00:25:27.017536 | orchestrator | 2026-01-05 00:25:27.017547 | orchestrator | 
TASKS RECAP ********************************************************************
2026-01-05 00:25:27.017558 | orchestrator | Monday 05 January 2026 00:25:26 +0000 (0:00:00.633) 0:01:51.018 ********
2026-01-05 00:25:27.017568 | orchestrator | ===============================================================================
2026-01-05 00:25:27.017579 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-01-05 00:25:27.017590 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.54s
2026-01-05 00:25:27.017655 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.00s
2026-01-05 00:25:27.017675 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.62s
2026-01-05 00:25:27.017692 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.27s
2026-01-05 00:25:27.017707 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s
2026-01-05 00:25:27.017726 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s
2026-01-05 00:25:27.017746 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s
2026-01-05 00:25:27.017764 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s
2026-01-05 00:25:27.017781 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2026-01-05 00:25:27.017793 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-01-05 00:25:27.403952 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-05 00:25:27.404077 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-01-05 00:25:27.459964 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 00:25:27.460045 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-01-05 00:25:27.467654 | orchestrator | + set -e
2026-01-05 00:25:27.467718 | orchestrator | + NAMESPACE=kolla/release
2026-01-05 00:25:27.467727 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-01-05 00:25:27.471601 | orchestrator | ++ semver 9.5.0 9.0.0
2026-01-05 00:25:27.540452 | orchestrator | + [[ 1 -lt 0 ]]
2026-01-05 00:25:27.541234 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-01-05 00:25:39.615786 | orchestrator | 2026-01-05 00:25:39 | INFO  | Task 8bb0914d-87a6-451e-9b4e-4b679f7e6eeb (operator) was prepared for execution.
2026-01-05 00:25:39.615937 | orchestrator | 2026-01-05 00:25:39 | INFO  | It takes a moment until task 8bb0914d-87a6-451e-9b4e-4b679f7e6eeb (operator) has been started and output is visible here.
2026-01-05 00:25:56.098986 | orchestrator |
2026-01-05 00:25:56.099113 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-01-05 00:25:56.099126 | orchestrator |
2026-01-05 00:25:56.099135 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-05 00:25:56.099143 | orchestrator | Monday 05 January 2026 00:25:43 +0000 (0:00:00.142) 0:00:00.142 ********
2026-01-05 00:25:56.099151 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:56.099161 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:56.099168 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:56.099174 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:56.099181 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:56.099187 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:56.099194 | orchestrator |
2026-01-05 00:25:56.099201 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-01-05 00:25:56.099208 | orchestrator | Monday 05 January 2026 00:25:47 +0000 (0:00:03.330) 0:00:03.472 ********
2026-01-05 00:25:56.099215 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:56.099222 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:56.099229 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:56.099236 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:56.099244 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:56.099251 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:56.099258 | orchestrator |
2026-01-05 00:25:56.099265 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-01-05 00:25:56.099272 | orchestrator |
2026-01-05 00:25:56.099279 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-05 00:25:56.099286 | orchestrator | Monday 05 January 2026 00:25:47 +0000 (0:00:00.738) 0:00:04.210 ********
2026-01-05 00:25:56.099293 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:56.099300 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:56.099308 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:56.099332 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:56.099340 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:56.099347 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:56.099354 | orchestrator |
2026-01-05 00:25:56.099362 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-05 00:25:56.099369 | orchestrator | Monday 05 January 2026 00:25:48 +0000 (0:00:00.175) 0:00:04.386 ********
2026-01-05 00:25:56.099375 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:25:56.099382 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:25:56.099388 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:25:56.099394 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:25:56.099401 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:25:56.099407 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:25:56.099413 | orchestrator |
2026-01-05 00:25:56.099419 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-05 00:25:56.099425 | orchestrator | Monday 05 January 2026 00:25:48 +0000 (0:00:00.666) 0:00:04.573 ********
2026-01-05 00:25:56.099431 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:56.099440 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:56.099446 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:56.099452 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:56.099459 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:56.099465 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:56.099472 | orchestrator |
2026-01-05 00:25:56.099480 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-05 00:25:56.099487 | orchestrator | Monday 05 January 2026 00:25:48 +0000 (0:00:00.844) 0:00:05.239 ********
2026-01-05 00:25:56.099493 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:56.099500 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:56.099507 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:56.099538 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:56.099546 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:56.099552 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:56.099560 | orchestrator |
2026-01-05 00:25:56.099566 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-05 00:25:56.099573 | orchestrator | Monday 05 January 2026 00:25:49 +0000 (0:00:01.255) 0:00:06.084 ********
2026-01-05 00:25:56.099581 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-05 00:25:56.099589 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-05 00:25:56.099596 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-05 00:25:56.099604 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-05 00:25:56.099613 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-05 00:25:56.099620 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-05 00:25:56.099627 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-05 00:25:56.099634 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-05 00:25:56.099642 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-05 00:25:56.099671 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-05 00:25:56.099679 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-05 00:25:56.099686 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-05 00:25:56.099693 | orchestrator |
2026-01-05 00:25:56.099699 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-05 00:25:56.099706 | orchestrator | Monday 05 January 2026 00:25:51 +0000 (0:00:01.255) 0:00:07.339 ********
2026-01-05 00:25:56.099712 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:56.099718 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:56.099725 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:56.099732 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:56.099739 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:56.099747 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:56.099754 | orchestrator |
2026-01-05 00:25:56.099762 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-05 00:25:56.099771 | orchestrator | Monday 05 January 2026 00:25:52 +0000 (0:00:01.180) 0:00:08.519 ********
2026-01-05 00:25:56.099777 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-05 00:25:56.099783 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-05 00:25:56.099789 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-05 00:25:56.099795 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:25:56.099821 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:25:56.099829 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:25:56.099835 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:25:56.099842 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:25:56.099848 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-05 00:25:56.099855 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-05 00:25:56.099861 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-05 00:25:56.099867 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-05 00:25:56.099873 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-05 00:25:56.099879 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-05 00:25:56.099886 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-05 00:25:56.099892 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:25:56.099898 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:25:56.099905 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:25:56.099922 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:25:56.099929 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:25:56.099936 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-05 00:25:56.099942 | orchestrator |
2026-01-05 00:25:56.099949 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-05 00:25:56.099957 | orchestrator | Monday 05 January 2026 00:25:53 +0000 (0:00:01.514) 0:00:10.034 ********
2026-01-05 00:25:56.099963 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:56.099970 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:56.099977 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:56.099983 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:56.099990 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:56.099996 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:56.100002 | orchestrator |
2026-01-05 00:25:56.100010 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-05 00:25:56.100016 | orchestrator | Monday 05 January 2026 00:25:53 +0000 (0:00:00.225) 0:00:10.259 ********
2026-01-05 00:25:56.100023 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:56.100029 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:56.100036 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:56.100043 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:56.100049 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:56.100055 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:56.100062 | orchestrator |
2026-01-05 00:25:56.100068 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-05 00:25:56.100075 | orchestrator | Monday 05 January 2026 00:25:54 +0000 (0:00:00.219) 0:00:10.478 ********
2026-01-05 00:25:56.100081 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:56.100088 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:56.100094 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:56.100100 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:56.100106 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:56.100113 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:56.100119 | orchestrator |
2026-01-05 00:25:56.100126 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-05 00:25:56.100132 | orchestrator | Monday 05 January 2026 00:25:54 +0000 (0:00:00.594) 0:00:11.073 ********
2026-01-05 00:25:56.100139 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:56.100145 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:56.100151 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:56.100157 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:56.100163 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:56.100169 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:56.100176 | orchestrator |
2026-01-05 00:25:56.100182 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-05 00:25:56.100199 | orchestrator | Monday 05 January 2026 00:25:54 +0000 (0:00:00.181) 0:00:11.255 ********
2026-01-05 00:25:56.100207 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-05 00:25:56.100214 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:56.100220 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-05 00:25:56.100226 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 00:25:56.100232 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:56.100238 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:56.100244 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-05 00:25:56.100250 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:56.100256 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-05 00:25:56.100263 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:56.100269 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-05 00:25:56.100276 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:56.100290 | orchestrator |
2026-01-05 00:25:56.100297 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-05 00:25:56.100303 | orchestrator | Monday 05 January 2026 00:25:55 +0000 (0:00:00.758) 0:00:12.014 ********
2026-01-05 00:25:56.100309 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:56.100316 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:56.100321 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:56.100327 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:56.100333 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:56.100338 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:56.100344 | orchestrator |
2026-01-05 00:25:56.100349 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-05 00:25:56.100355 | orchestrator | Monday 05 January 2026 00:25:55 +0000 (0:00:00.177) 0:00:12.191 ********
2026-01-05 00:25:56.100360 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:56.100366 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:56.100372 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:56.100378 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:56.100394 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:57.469377 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:57.469494 | orchestrator |
2026-01-05 00:25:57.469509 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-05 00:25:57.469521 | orchestrator | Monday 05 January 2026 00:25:56 +0000 (0:00:00.191) 0:00:12.382 ********
2026-01-05 00:25:57.469530 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:57.469539 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:57.469548 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:57.469556 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:57.469565 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:57.469574 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:57.469582 | orchestrator |
2026-01-05 00:25:57.469591 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-05 00:25:57.469600 | orchestrator | Monday 05 January 2026 00:25:56 +0000 (0:00:00.163) 0:00:12.546 ********
2026-01-05 00:25:57.469609 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:25:57.469617 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:25:57.469626 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:25:57.469634 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:25:57.469643 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:25:57.469713 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:25:57.469723 | orchestrator |
2026-01-05 00:25:57.469732 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-05 00:25:57.469741 | orchestrator | Monday 05 January 2026 00:25:56 +0000 (0:00:00.679) 0:00:13.226 ********
2026-01-05 00:25:57.469750 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:25:57.469759 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:25:57.469768 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:25:57.469795 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:25:57.469804 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:25:57.469813 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:25:57.469821 | orchestrator |
2026-01-05 00:25:57.469830 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:25:57.469841 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:25:57.469851 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:25:57.469860 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:25:57.469869 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:25:57.469901 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:25:57.469912 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 00:25:57.469922 | orchestrator |
2026-01-05 00:25:57.469931 | orchestrator |
2026-01-05 00:25:57.469942 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:25:57.469953 | orchestrator | Monday 05 January 2026 00:25:57 +0000 (0:00:00.259) 0:00:13.485 ********
2026-01-05 00:25:57.469963 | orchestrator | ===============================================================================
2026-01-05 00:25:57.469973 | orchestrator | Gathering Facts --------------------------------------------------------- 3.33s
2026-01-05 00:25:57.469983 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.51s
2026-01-05 00:25:57.469995 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.26s
2026-01-05 00:25:57.470005 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2026-01-05 00:25:57.470014 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s
2026-01-05 00:25:57.470079 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.76s
2026-01-05 00:25:57.470089 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s
2026-01-05 00:25:57.470100 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.68s
2026-01-05 00:25:57.470109 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s
2026-01-05 00:25:57.470120 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2026-01-05 00:25:57.470130 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s
2026-01-05 00:25:57.470140 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.23s
2026-01-05 00:25:57.470151 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.22s
2026-01-05 00:25:57.470161 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s
2026-01-05 00:25:57.470171 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2026-01-05 00:25:57.470181 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-01-05 00:25:57.470190 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2026-01-05 00:25:57.470201 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-01-05 00:25:57.470211 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-01-05 00:25:57.781422 | orchestrator | + osism apply --environment custom facts
2026-01-05 00:25:59.740877 | orchestrator | 2026-01-05 00:25:59 | INFO  | Trying to run play facts in environment custom
2026-01-05 00:26:09.837236 | orchestrator | 2026-01-05 00:26:09 | INFO  | Task 56199973-fe0e-492f-a3b7-a0548374c194 (facts) was prepared for execution.
2026-01-05 00:26:09.837345 | orchestrator | 2026-01-05 00:26:09 | INFO  | It takes a moment until task 56199973-fe0e-492f-a3b7-a0548374c194 (facts) has been started and output is visible here.
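The shell trace earlier in this log gates the Kolla image namespace on the release version: `semver 9.5.0 10.0.0-0` prints `-1`, `[[ -1 -ge 0 ]]` is false, so the job calls `set-kolla-namespace.sh kolla/release`, which rewrites `docker_namespace` in the inventory with `sed`. A minimal sketch of that gating, with a hypothetical `semver` reimplemented on top of `sort -V` (the real helper is a separate binary not shown in the log, and the default namespace here is an assumption):

```shell
#!/usr/bin/env bash
set -e

# Hypothetical stand-in for the `semver` helper seen in the trace:
# prints -1, 0 or 1 depending on how $1 compares to $2.
semver() {
  if [[ "$1" == "$2" ]]; then
    echo 0
  elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
    echo -1
  else
    echo 1
  fi
}

VERSION=9.5.0
NAMESPACE=kolla            # assumed default; not taken from the log
if [[ "$VERSION" != latest ]] && [[ "$(semver "$VERSION" 10.0.0-0)" -lt 0 ]]; then
  # Releases below 10.0.0 select the kolla/release namespace,
  # mirroring the set-kolla-namespace.sh call in the trace.
  NAMESPACE=kolla/release
fi

# set-kolla-namespace.sh then rewrites the inventory in place;
# a temp file stands in for group_vars/all/kolla.yml here.
KOLLA_YML=$(mktemp)
echo 'docker_namespace: kolla' > "$KOLLA_YML"
sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$KOLLA_YML"
```

The inverted `-lt 0` test expresses the same branch the trace takes when `[[ -1 -ge 0 ]]` fails.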
2026-01-05 00:26:55.119951 | orchestrator |
2026-01-05 00:26:55.120074 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-05 00:26:55.120105 | orchestrator |
2026-01-05 00:26:55.120128 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-05 00:26:55.120139 | orchestrator | Monday 05 January 2026 00:26:13 +0000 (0:00:00.083) 0:00:00.083 ********
2026-01-05 00:26:55.120149 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:55.120161 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:55.120171 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:55.120205 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:55.120215 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:55.120225 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:55.120234 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:55.120244 | orchestrator |
2026-01-05 00:26:55.120254 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-05 00:26:55.120264 | orchestrator | Monday 05 January 2026 00:26:15 +0000 (0:00:01.425) 0:00:01.508 ********
2026-01-05 00:26:55.120274 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:55.120283 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:55.120293 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:26:55.120303 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:55.120312 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:55.120322 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:26:55.120331 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:26:55.120341 | orchestrator |
2026-01-05 00:26:55.120350 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-05 00:26:55.120359 | orchestrator |
2026-01-05 00:26:55.120369 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-05 00:26:55.120379 | orchestrator | Monday 05 January 2026 00:26:16 +0000 (0:00:01.221) 0:00:02.729 ********
2026-01-05 00:26:55.120389 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:55.120398 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:55.120408 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:55.120417 | orchestrator |
2026-01-05 00:26:55.120427 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-05 00:26:55.120437 | orchestrator | Monday 05 January 2026 00:26:16 +0000 (0:00:00.111) 0:00:02.841 ********
2026-01-05 00:26:55.120447 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:55.120456 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:55.120466 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:55.120476 | orchestrator |
2026-01-05 00:26:55.120487 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-05 00:26:55.120498 | orchestrator | Monday 05 January 2026 00:26:16 +0000 (0:00:00.212) 0:00:03.053 ********
2026-01-05 00:26:55.120509 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:55.120520 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:55.120532 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:55.120543 | orchestrator |
2026-01-05 00:26:55.120555 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-05 00:26:55.120566 | orchestrator | Monday 05 January 2026 00:26:17 +0000 (0:00:00.156) 0:00:03.285 ********
2026-01-05 00:26:55.120578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:26:55.120590 | orchestrator |
2026-01-05 00:26:55.120602 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-05 00:26:55.120613 | orchestrator | Monday 05 January 2026 00:26:17 +0000 (0:00:00.451) 0:00:03.442 ********
2026-01-05 00:26:55.120624 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:55.120635 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:55.120646 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:55.120657 | orchestrator |
2026-01-05 00:26:55.120669 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-05 00:26:55.120681 | orchestrator | Monday 05 January 2026 00:26:17 +0000 (0:00:00.151) 0:00:03.893 ********
2026-01-05 00:26:55.120707 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:26:55.120719 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:26:55.120731 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:26:55.120742 | orchestrator |
2026-01-05 00:26:55.120753 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-05 00:26:55.120764 | orchestrator | Monday 05 January 2026 00:26:17 +0000 (0:00:00.151) 0:00:04.044 ********
2026-01-05 00:26:55.120774 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:55.120786 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:55.120805 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:55.120815 | orchestrator |
2026-01-05 00:26:55.120825 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-05 00:26:55.120834 | orchestrator | Monday 05 January 2026 00:26:19 +0000 (0:00:01.090) 0:00:05.135 ********
2026-01-05 00:26:55.120844 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:55.120853 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:55.120863 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:55.120872 | orchestrator |
2026-01-05 00:26:55.120882 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-05 00:26:55.120891 | orchestrator | Monday 05 January 2026 00:26:19 +0000 (0:00:00.473) 0:00:05.609 ********
2026-01-05 00:26:55.120901 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:55.120911 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:55.120920 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:55.120929 | orchestrator |
2026-01-05 00:26:55.120992 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-05 00:26:55.121004 | orchestrator | Monday 05 January 2026 00:26:20 +0000 (0:00:01.096) 0:00:06.705 ********
2026-01-05 00:26:55.121013 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:55.121023 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:55.121032 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:55.121042 | orchestrator |
2026-01-05 00:26:55.121051 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-05 00:26:55.121061 | orchestrator | Monday 05 January 2026 00:26:37 +0000 (0:00:16.612) 0:00:23.318 ********
2026-01-05 00:26:55.121070 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:26:55.121080 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:26:55.121090 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:26:55.121099 | orchestrator |
2026-01-05 00:26:55.121109 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-05 00:26:55.121135 | orchestrator | Monday 05 January 2026 00:26:37 +0000 (0:00:00.144) 0:00:23.463 ********
2026-01-05 00:26:55.121145 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:26:55.121155 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:26:55.121165 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:26:55.121174 | orchestrator |
2026-01-05 00:26:55.121184 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-05 00:26:55.121194 | orchestrator | Monday 05 January 2026 00:26:45 +0000 (0:00:08.202) 0:00:31.666 ********
2026-01-05 00:26:55.121204 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:55.121213 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:55.121223 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:55.121233 | orchestrator |
2026-01-05 00:26:55.121242 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-05 00:26:55.121252 | orchestrator | Monday 05 January 2026 00:26:46 +0000 (0:00:00.464) 0:00:32.130 ********
2026-01-05 00:26:55.121262 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-05 00:26:55.121277 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-05 00:26:55.121287 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-05 00:26:55.121297 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-05 00:26:55.121307 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-05 00:26:55.121317 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-05 00:26:55.121327 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-05 00:26:55.121336 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-05 00:26:55.121346 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-05 00:26:55.121356 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:26:55.121365 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:26:55.121382 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-05 00:26:55.121391 | orchestrator |
2026-01-05 00:26:55.121401 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-05 00:26:55.121411 | orchestrator | Monday 05 January 2026 00:26:49 +0000 (0:00:03.664) 0:00:35.795 ********
2026-01-05 00:26:55.121420 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:55.121430 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:55.121440 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:55.121450 | orchestrator |
2026-01-05 00:26:55.121459 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:26:55.121469 | orchestrator |
2026-01-05 00:26:55.121479 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:26:55.121489 | orchestrator | Monday 05 January 2026 00:26:51 +0000 (0:00:01.530) 0:00:37.325 ********
2026-01-05 00:26:55.121499 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:26:55.121509 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:26:55.121518 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:26:55.121528 | orchestrator | ok: [testbed-manager]
2026-01-05 00:26:55.121538 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:26:55.121547 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:26:55.121557 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:26:55.121567 | orchestrator |
2026-01-05 00:26:55.121576 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:26:55.121587 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:26:55.121598 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:26:55.121609 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:26:55.121618 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:26:55.121628 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:26:55.121638 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:26:55.121648 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:26:55.121657 | orchestrator |
2026-01-05 00:26:55.121667 | orchestrator |
2026-01-05 00:26:55.121677 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:26:55.121687 | orchestrator | Monday 05 January 2026 00:26:55 +0000 (0:00:03.885) 0:00:41.210 ********
2026-01-05 00:26:55.121739 | orchestrator | ===============================================================================
2026-01-05 00:26:55.121749 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.61s
2026-01-05 00:26:55.121759 | orchestrator | Install required packages (Debian) -------------------------------------- 8.20s
2026-01-05 00:26:55.121768 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.89s
2026-01-05 00:26:55.121778 | orchestrator | Copy fact files --------------------------------------------------------- 3.66s
2026-01-05 00:26:55.121788 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.53s
2026-01-05 00:26:55.121797 | orchestrator | Create custom facts directory ------------------------------------------- 1.43s
2026-01-05 00:26:55.121813 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-01-05 00:26:55.389286 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2026-01-05 00:26:55.389404 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s
2026-01-05 00:26:55.389464 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-01-05 00:26:55.389483 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-01-05 00:26:55.389494 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-01-05 00:26:55.389505 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-01-05 00:26:55.389516 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-01-05 00:26:55.389545 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-01-05 00:26:55.389558 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-01-05 00:26:55.389569 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.14s
2026-01-05 00:26:55.389580 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-01-05 00:26:55.749459 | orchestrator | + osism apply bootstrap
2026-01-05 00:27:07.926857 | orchestrator | 2026-01-05 00:27:07 | INFO  | Task c7c87373-ceeb-4666-8539-0e4ad6c5bddf (bootstrap) was prepared for execution.
2026-01-05 00:27:07.926990 | orchestrator | 2026-01-05 00:27:07 | INFO  | It takes a moment until task c7c87373-ceeb-4666-8539-0e4ad6c5bddf (bootstrap) has been started and output is visible here.
2026-01-05 00:27:24.864100 | orchestrator |
2026-01-05 00:27:24.864263 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-05 00:27:24.864293 | orchestrator |
2026-01-05 00:27:24.864313 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-05 00:27:24.864334 | orchestrator | Monday 05 January 2026 00:27:12 +0000 (0:00:00.152) 0:00:00.152 ********
2026-01-05 00:27:24.864354 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:24.864375 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:24.864392 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:24.864411 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:24.864431 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:24.864450 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:24.864470 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:24.864487 | orchestrator |
2026-01-05 00:27:24.864505 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:27:24.864524 | orchestrator |
2026-01-05 00:27:24.864544 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:27:24.864564 | orchestrator | Monday 05 January 2026 00:27:12 +0000 (0:00:00.275) 0:00:00.427 ********
2026-01-05 00:27:24.864583 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:24.864603 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:24.864623 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:24.864644 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:24.864664 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:24.864686 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:24.864704 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:24.864754 | orchestrator |
2026-01-05 00:27:24.864775 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
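[Editor's note: the "Group hosts based on state bootstrap" play logged above follows the common Ansible `group_by` pattern, in which each host is sorted into a dynamic group derived from a host variable so that later plays can target only hosts in that state. A minimal hedged sketch follows; the `state` variable name and key format are assumptions for illustration, not taken from this log or from the OSISM playbooks themselves.]

```yaml
# Hypothetical sketch of a state-based grouping play.
# The actual OSISM task and its variables are not visible in this log.
- name: Group hosts based on state bootstrap
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on state bootstrap
      ansible.builtin.group_by:
        # Hosts with state == "bootstrap" land in a dynamic
        # group such as "state_bootstrap" for later plays.
        key: "state_{{ state | default('bootstrap') }}"
```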
2026-01-05 00:27:24.864796 | orchestrator |
2026-01-05 00:27:24.864817 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:27:24.864838 | orchestrator | Monday 05 January 2026 00:27:16 +0000 (0:00:03.922) 0:00:04.350 ********
2026-01-05 00:27:24.864858 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-05 00:27:24.864880 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-05 00:27:24.864900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-05 00:27:24.864921 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-05 00:27:24.864939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 00:27:24.864958 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-05 00:27:24.865015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 00:27:24.865036 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-05 00:27:24.865055 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-05 00:27:24.865069 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-05 00:27:24.865080 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-05 00:27:24.865091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 00:27:24.865103 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-05 00:27:24.865114 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-05 00:27:24.865141 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-05 00:27:24.865152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-05 00:27:24.865163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 00:27:24.865186 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:27:24.865198 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-05 00:27:24.865208 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-05 00:27:24.865218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-05 00:27:24.865228 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-05 00:27:24.865238 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 00:27:24.865247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 00:27:24.865257 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 00:27:24.865267 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 00:27:24.865277 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-05 00:27:24.865286 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-05 00:27:24.865296 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 00:27:24.865306 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 00:27:24.865316 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:24.865326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 00:27:24.865335 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:24.865345 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 00:27:24.865355 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:24.865365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-05 00:27:24.865374 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-05 00:27:24.865385 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-05 00:27:24.865394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-05 00:27:24.865404 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-05 00:27:24.865414 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-05 00:27:24.865424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 00:27:24.865434 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-05 00:27:24.865443 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 00:27:24.865453 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-05 00:27:24.865463 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 00:27:24.865494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 00:27:24.865505 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:24.865515 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 00:27:24.865524 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-05 00:27:24.865534 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 00:27:24.865544 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:24.865563 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 00:27:24.865573 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 00:27:24.865603 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-05 00:27:24.865613 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:24.865623 | orchestrator |
2026-01-05 00:27:24.865633 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-05 00:27:24.865643 | orchestrator |
2026-01-05 00:27:24.865653 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-05 00:27:24.865663 | orchestrator | Monday 05 January 2026 00:27:17 +0000 (0:00:00.489) 0:00:04.840 ********
2026-01-05 00:27:24.865672 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:24.865682 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:24.865692 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:24.865701 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:24.865736 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:24.865754 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:24.865765 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:24.865775 | orchestrator |
2026-01-05 00:27:24.865785 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-05 00:27:24.865795 | orchestrator | Monday 05 January 2026 00:27:18 +0000 (0:00:01.318) 0:00:06.158 ********
2026-01-05 00:27:24.865805 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:24.865814 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:24.865824 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:24.865833 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:24.865843 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:24.865852 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:24.865862 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:24.865871 | orchestrator |
2026-01-05 00:27:24.865881 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-05 00:27:24.865891 | orchestrator | Monday 05 January 2026 00:27:19 +0000 (0:00:01.417) 0:00:07.576 ********
2026-01-05 00:27:24.865902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:27:24.865915 | orchestrator |
2026-01-05 00:27:24.865925 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-05 00:27:24.865935 | orchestrator | Monday 05 January 2026 00:27:20 +0000 (0:00:00.318) 0:00:07.894 ********
2026-01-05 00:27:24.865945 | orchestrator | changed: [testbed-manager]
2026-01-05 00:27:24.865955 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:24.865965 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:24.865975 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:24.865984 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:24.865994 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:24.866004 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:24.866078 | orchestrator |
2026-01-05 00:27:24.866092 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-01-05 00:27:24.866102 | orchestrator | Monday 05 January 2026 00:27:22 +0000 (0:00:02.099) 0:00:09.993 ********
2026-01-05 00:27:24.866112 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:27:24.866123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:27:24.866134 | orchestrator |
2026-01-05 00:27:24.866144 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-01-05 00:27:24.866154 | orchestrator | Monday 05 January 2026 00:27:22 +0000 (0:00:00.297) 0:00:10.291 ********
2026-01-05 00:27:24.866164 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:24.866174 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:24.866184 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:24.866212 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:24.866222 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:24.866232 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:24.866242 | orchestrator |
2026-01-05 00:27:24.866252 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-01-05 00:27:24.866262 | orchestrator | Monday 05 January 2026 00:27:23 +0000 (0:00:01.068) 0:00:11.359 ********
2026-01-05 00:27:24.866272 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:27:24.866281 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:24.866291 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:24.866301 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:24.866311 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:24.866320 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:24.866330 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:24.866340 | orchestrator |
2026-01-05 00:27:24.866355 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-01-05 00:27:24.866365 | orchestrator | Monday 05 January 2026 00:27:24 +0000 (0:00:00.682) 0:00:12.041 ********
2026-01-05 00:27:24.866375 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:24.866384 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:24.866394 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:24.866404 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:24.866475 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:24.866486 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:24.866495 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:24.866505 | orchestrator |
2026-01-05 00:27:24.866515 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-05 00:27:24.866526 | orchestrator | Monday 05 January 2026 00:27:24 +0000 (0:00:00.264) 0:00:12.504 ********
2026-01-05 00:27:24.866536 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:27:24.866546 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:24.866566 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:37.534324 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:37.534465 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:37.534483 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:37.534496 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:37.534508 | orchestrator |
2026-01-05 00:27:37.534521 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-05 00:27:37.534535 | orchestrator | Monday 05 January 2026 00:27:24 +0000 (0:00:00.264) 0:00:12.769 ********
2026-01-05 00:27:37.534549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:27:37.534581 | orchestrator |
2026-01-05 00:27:37.534593 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-05 00:27:37.534606 | orchestrator | Monday 05 January 2026 00:27:25 +0000 (0:00:00.327) 0:00:13.096 ********
2026-01-05 00:27:37.534617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:27:37.534629 | orchestrator |
2026-01-05 00:27:37.534640 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-05 00:27:37.534651 | orchestrator | Monday 05 January 2026 00:27:25 +0000 (0:00:00.331) 0:00:13.427 ********
2026-01-05 00:27:37.534663 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.534676 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:37.534688 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.534699 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:37.534710 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.534756 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.534769 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:37.534812 | orchestrator |
2026-01-05 00:27:37.534823 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-05 00:27:37.534835 | orchestrator | Monday 05 January 2026 00:27:27 +0000 (0:00:01.620) 0:00:15.048 ********
2026-01-05 00:27:37.534845 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:27:37.534856 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:37.534867 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:37.534878 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:37.534889 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:37.534900 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:37.534911 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:37.534922 | orchestrator |
2026-01-05 00:27:37.534933 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-05 00:27:37.534945 | orchestrator | Monday 05 January 2026 00:27:27 +0000 (0:00:00.235) 0:00:15.284 ********
2026-01-05 00:27:37.534956 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.534967 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.534978 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.534989 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.535000 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:37.535011 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:37.535021 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:37.535032 | orchestrator |
2026-01-05 00:27:37.535043 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-05 00:27:37.535054 | orchestrator | Monday 05 January 2026 00:27:28 +0000 (0:00:00.585) 0:00:15.869 ********
2026-01-05 00:27:37.535065 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:27:37.535076 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:37.535087 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:37.535099 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:37.535110 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:37.535121 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:37.535132 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:37.535143 | orchestrator |
2026-01-05 00:27:37.535155 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-05 00:27:37.535167 | orchestrator | Monday 05 January 2026 00:27:28 +0000 (0:00:00.394) 0:00:16.264 ********
2026-01-05 00:27:37.535178 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.535190 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:37.535201 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:37.535212 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:37.535223 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:37.535233 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:37.535244 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:37.535255 | orchestrator |
2026-01-05 00:27:37.535266 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-05 00:27:37.535277 | orchestrator | Monday 05 January 2026 00:27:29 +0000 (0:00:00.579) 0:00:16.844 ********
2026-01-05 00:27:37.535289 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.535300 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:37.535311 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:37.535322 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:37.535333 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:37.535344 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:37.535364 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:37.535375 | orchestrator |
2026-01-05 00:27:37.535387 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-05 00:27:37.535398 | orchestrator | Monday 05 January 2026 00:27:30 +0000 (0:00:01.174) 0:00:18.018 ********
2026-01-05 00:27:37.535409 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.535420 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:37.535430 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.535441 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:37.535452 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.535470 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.535481 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:37.535492 | orchestrator |
2026-01-05 00:27:37.535503 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-05 00:27:37.535514 | orchestrator | Monday 05 January 2026 00:27:31 +0000 (0:00:01.043) 0:00:19.062 ********
2026-01-05 00:27:37.535546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:27:37.535558 | orchestrator |
2026-01-05 00:27:37.535569 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-05 00:27:37.535580 | orchestrator | Monday 05 January 2026 00:27:31 +0000 (0:00:00.308) 0:00:19.370 ********
2026-01-05 00:27:37.535591 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:27:37.535602 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:27:37.535612 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:37.535623 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:27:37.535634 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:37.535670 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:27:37.535681 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:37.535692 | orchestrator |
2026-01-05 00:27:37.535703 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-05 00:27:37.535715 | orchestrator | Monday 05 January 2026 00:27:32 +0000 (0:00:01.310) 0:00:20.681 ********
2026-01-05 00:27:37.535742 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.535753 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.535764 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.535775 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.535786 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:37.535797 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:37.535808 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:37.535818 | orchestrator |
2026-01-05 00:27:37.535829 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-05 00:27:37.535840 | orchestrator | Monday 05 January 2026 00:27:33 +0000 (0:00:00.251) 0:00:20.933 ********
2026-01-05 00:27:37.535851 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.535862 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.535872 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.535883 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.535894 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:37.535905 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:37.535915 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:37.535926 | orchestrator |
2026-01-05 00:27:37.535937 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-05 00:27:37.535948 | orchestrator | Monday 05 January 2026 00:27:33 +0000 (0:00:00.236) 0:00:21.169 ********
2026-01-05 00:27:37.535959 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.535969 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.535980 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.535991 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.536001 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:37.536012 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:37.536023 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:37.536033 | orchestrator |
2026-01-05 00:27:37.536044 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-05 00:27:37.536055 | orchestrator | Monday 05 January 2026 00:27:33 +0000 (0:00:00.223) 0:00:21.393 ********
2026-01-05 00:27:37.536067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:27:37.536080 | orchestrator |
2026-01-05 00:27:37.536091 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-05 00:27:37.536109 | orchestrator | Monday 05 January 2026 00:27:33 +0000 (0:00:00.310) 0:00:21.704 ********
2026-01-05 00:27:37.536120 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.536131 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.536143 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.536153 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:37.536164 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.536175 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:37.536186 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:37.536197 | orchestrator |
2026-01-05 00:27:37.536207 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-05 00:27:37.536218 | orchestrator | Monday 05 January 2026 00:27:34 +0000 (0:00:00.551) 0:00:22.255 ********
2026-01-05 00:27:37.536229 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:27:37.536240 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:27:37.536251 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:27:37.536262 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:27:37.536273 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:27:37.536284 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:27:37.536294 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:27:37.536305 | orchestrator |
2026-01-05 00:27:37.536316 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-05 00:27:37.536327 | orchestrator | Monday 05 January 2026 00:27:34 +0000 (0:00:00.236) 0:00:22.491 ********
2026-01-05 00:27:37.536338 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.536349 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.536360 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.536370 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.536381 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:27:37.536392 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:27:37.536403 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:27:37.536414 | orchestrator |
2026-01-05 00:27:37.536425 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-05 00:27:37.536436 | orchestrator | Monday 05 January 2026 00:27:35 +0000 (0:00:01.144) 0:00:23.635 ********
2026-01-05 00:27:37.536447 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.536458 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.536468 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.536480 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.536491 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:27:37.536501 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:27:37.536512 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:27:37.536523 | orchestrator |
2026-01-05 00:27:37.536534 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-05 00:27:37.536545 | orchestrator | Monday 05 January 2026 00:27:36 +0000 (0:00:00.561) 0:00:24.197 ********
2026-01-05 00:27:37.536556 | orchestrator | ok: [testbed-manager]
2026-01-05 00:27:37.536567 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:27:37.536586 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:27:37.536597 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:27:37.536616 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:19.472979 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:19.473148 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:19.473178 | orchestrator |
2026-01-05 00:28:19.473201 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-05 00:28:19.473223 | orchestrator | Monday 05 January 2026 00:27:37 +0000 (0:00:01.127) 0:00:25.325 ********
2026-01-05 00:28:19.473243 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:19.473262 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:19.473281 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:19.473299 | orchestrator | changed: [testbed-manager]
2026-01-05 00:28:19.473316 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:19.473354 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:19.473373 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:19.473390 | orchestrator |
2026-01-05 00:28:19.473409 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-05 00:28:19.473463 | orchestrator | Monday 05 January 2026 00:27:53 +0000 (0:00:16.167) 0:00:41.492 ********
2026-01-05 00:28:19.473482 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:19.473500 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:19.473518 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:19.473536 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:19.473554 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:19.473572 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:19.473591 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:19.473610 | orchestrator |
2026-01-05 00:28:19.473628 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-05 00:28:19.473647 | orchestrator | Monday 05 January 2026 00:27:53 +0000 (0:00:00.255) 0:00:41.748 ********
2026-01-05 00:28:19.473665 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:19.473684 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:19.473703 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:19.473721 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:19.473740 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:19.473790 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:19.473810 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:19.473829 | orchestrator |
2026-01-05 00:28:19.473847 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-05 00:28:19.473866 | orchestrator | Monday 05 January 2026 00:27:54 +0000 (0:00:00.229) 0:00:41.977 ********
2026-01-05 00:28:19.473885 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:19.473904 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:19.473922 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:19.473942 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:19.473961 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:19.473978 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:19.473997 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:19.474014 | orchestrator |
2026-01-05 00:28:19.474120 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-05 00:28:19.474138 | orchestrator | Monday 05 January 2026 00:27:54 +0000 (0:00:00.281) 0:00:42.259 ********
2026-01-05 00:28:19.474159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:28:19.474182 | orchestrator |
2026-01-05 00:28:19.474201 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-05 00:28:19.474219 | orchestrator | Monday 05 January 2026 00:27:54 +0000 (0:00:00.321) 0:00:42.580 ********
2026-01-05 00:28:19.474238 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:19.474255 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:19.474273 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:19.474290 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:19.474309 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:19.474327 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:19.474344 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:19.474362 | orchestrator |
2026-01-05 00:28:19.474379 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-05 00:28:19.474396 | orchestrator | Monday 05 January 2026 00:27:56 +0000 (0:00:01.990) 0:00:44.570 ********
2026-01-05 00:28:19.474414 | orchestrator | changed: [testbed-manager]
2026-01-05 00:28:19.474432 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:19.474451 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:19.474470 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:19.474488 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:19.474507 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:19.474526 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:19.474545 | orchestrator |
2026-01-05 00:28:19.474565 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-05 00:28:19.474584 | orchestrator | Monday 05 January 2026 00:27:57 +0000 (0:00:01.131) 0:00:45.702 ********
2026-01-05 00:28:19.474623 | orchestrator | ok: [testbed-manager]
2026-01-05 00:28:19.474642 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:28:19.474661 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:28:19.474681 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:28:19.474700 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:28:19.474720 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:28:19.474739 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:28:19.474789 | orchestrator |
2026-01-05 00:28:19.474809 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-05 00:28:19.474828 | orchestrator | Monday 05 January 2026 00:27:58 +0000 (0:00:00.847) 0:00:46.549 ********
2026-01-05 00:28:19.474873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:28:19.474897 | orchestrator |
2026-01-05 00:28:19.474918 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-05 00:28:19.474940 | orchestrator | Monday 05 January 2026 00:27:59 +0000 (0:00:00.311) 0:00:46.860 ********
2026-01-05 00:28:19.474960 | orchestrator | changed: [testbed-manager]
2026-01-05 00:28:19.474981 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:28:19.475001 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:28:19.475021 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:28:19.475042 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:28:19.475061 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:28:19.475082 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:28:19.475102 | orchestrator |
2026-01-05 00:28:19.475155 | orchestrator | TASK [osism.services.rsyslog :
Include additional log server tasks] ************ 2026-01-05 00:28:19.475178 | orchestrator | Monday 05 January 2026 00:28:00 +0000 (0:00:01.192) 0:00:48.053 ******** 2026-01-05 00:28:19.475198 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:28:19.475217 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:28:19.475234 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:28:19.475252 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:28:19.475269 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:28:19.475287 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:28:19.475305 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:28:19.475323 | orchestrator | 2026-01-05 00:28:19.475340 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-01-05 00:28:19.475358 | orchestrator | Monday 05 January 2026 00:28:00 +0000 (0:00:00.247) 0:00:48.301 ******** 2026-01-05 00:28:19.475377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:28:19.475396 | orchestrator | 2026-01-05 00:28:19.475413 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-01-05 00:28:19.475432 | orchestrator | Monday 05 January 2026 00:28:00 +0000 (0:00:00.360) 0:00:48.661 ******** 2026-01-05 00:28:19.475450 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:19.475468 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:19.475487 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:19.475504 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:19.475523 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:19.475539 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:19.475556 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:19.475573 | 
orchestrator | 2026-01-05 00:28:19.475591 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-01-05 00:28:19.475609 | orchestrator | Monday 05 January 2026 00:28:02 +0000 (0:00:01.869) 0:00:50.531 ******** 2026-01-05 00:28:19.475627 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:19.475645 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:28:19.475664 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:28:19.475683 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:28:19.475721 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:28:19.475739 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:28:19.475789 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:28:19.475807 | orchestrator | 2026-01-05 00:28:19.475826 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-01-05 00:28:19.475844 | orchestrator | Monday 05 January 2026 00:28:03 +0000 (0:00:01.191) 0:00:51.723 ******** 2026-01-05 00:28:19.475863 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:28:19.475883 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:28:19.475902 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:28:19.475922 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:28:19.475941 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:28:19.475960 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:28:19.475979 | orchestrator | changed: [testbed-manager] 2026-01-05 00:28:19.475998 | orchestrator | 2026-01-05 00:28:19.476018 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-01-05 00:28:19.476036 | orchestrator | Monday 05 January 2026 00:28:16 +0000 (0:00:13.008) 0:01:04.731 ******** 2026-01-05 00:28:19.476055 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:19.476075 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:19.476094 | orchestrator | ok: 
[testbed-manager] 2026-01-05 00:28:19.476113 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:19.476133 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:19.476151 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:19.476169 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:19.476189 | orchestrator | 2026-01-05 00:28:19.476208 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-01-05 00:28:19.476226 | orchestrator | Monday 05 January 2026 00:28:17 +0000 (0:00:00.730) 0:01:05.462 ******** 2026-01-05 00:28:19.476244 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:19.476261 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:19.476279 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:19.476296 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:19.476314 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:19.476333 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:19.476350 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:19.476367 | orchestrator | 2026-01-05 00:28:19.476384 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-01-05 00:28:19.476401 | orchestrator | Monday 05 January 2026 00:28:18 +0000 (0:00:00.969) 0:01:06.432 ******** 2026-01-05 00:28:19.476417 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:19.476434 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:19.476445 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:19.476455 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:19.476464 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:19.476474 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:19.476483 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:19.476493 | orchestrator | 2026-01-05 00:28:19.476503 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-01-05 00:28:19.476513 | orchestrator | Monday 
05 January 2026 00:28:18 +0000 (0:00:00.264) 0:01:06.696 ******** 2026-01-05 00:28:19.476523 | orchestrator | ok: [testbed-manager] 2026-01-05 00:28:19.476544 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:28:19.476554 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:28:19.476563 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:28:19.476573 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:28:19.476582 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:28:19.476592 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:28:19.476601 | orchestrator | 2026-01-05 00:28:19.476611 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-05 00:28:19.476621 | orchestrator | Monday 05 January 2026 00:28:19 +0000 (0:00:00.267) 0:01:06.963 ******** 2026-01-05 00:28:19.476632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:28:19.476653 | orchestrator | 2026-01-05 00:28:19.476679 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-05 00:30:38.983969 | orchestrator | Monday 05 January 2026 00:28:19 +0000 (0:00:00.295) 0:01:07.259 ******** 2026-01-05 00:30:38.984122 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:38.984145 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:38.984164 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:38.984181 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:38.984200 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:38.984220 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:38.984238 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:38.984255 | orchestrator | 2026-01-05 00:30:38.984267 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-01-05 00:30:38.984279 | orchestrator | Monday 05 January 2026 00:28:21 +0000 (0:00:01.807) 0:01:09.066 ******** 2026-01-05 00:30:38.984290 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:38.984303 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:38.984314 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:38.984325 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:30:38.984336 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:38.984346 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:38.984357 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:38.984368 | orchestrator | 2026-01-05 00:30:38.984379 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-05 00:30:38.984391 | orchestrator | Monday 05 January 2026 00:28:21 +0000 (0:00:00.668) 0:01:09.735 ******** 2026-01-05 00:30:38.984402 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:38.984415 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:38.984428 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:38.984440 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:38.984453 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:38.984466 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:38.984478 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:38.984490 | orchestrator | 2026-01-05 00:30:38.984503 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-05 00:30:38.984517 | orchestrator | Monday 05 January 2026 00:28:22 +0000 (0:00:00.226) 0:01:09.961 ******** 2026-01-05 00:30:38.984529 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:38.984542 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:38.984555 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:38.984567 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:38.984580 | orchestrator | ok: [testbed-node-0] 
2026-01-05 00:30:38.984592 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:38.984605 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:38.984617 | orchestrator | 2026-01-05 00:30:38.984629 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-05 00:30:38.984642 | orchestrator | Monday 05 January 2026 00:28:23 +0000 (0:00:01.275) 0:01:11.236 ******** 2026-01-05 00:30:38.984655 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:38.984669 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:38.984681 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:38.984693 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:30:38.984706 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:38.984723 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:38.984736 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:38.984769 | orchestrator | 2026-01-05 00:30:38.984780 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-05 00:30:38.984802 | orchestrator | Monday 05 January 2026 00:28:25 +0000 (0:00:01.915) 0:01:13.152 ******** 2026-01-05 00:30:38.984813 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:38.984825 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:38.984836 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:38.984868 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:38.984879 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:38.984917 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:38.984928 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:38.984939 | orchestrator | 2026-01-05 00:30:38.984951 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-05 00:30:38.984962 | orchestrator | Monday 05 January 2026 00:28:28 +0000 (0:00:02.657) 0:01:15.810 ******** 2026-01-05 00:30:38.984973 | orchestrator | ok: 
[testbed-manager] 2026-01-05 00:30:38.984984 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:38.985001 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:38.985020 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:38.985038 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:38.985056 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:38.985075 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:38.985095 | orchestrator | 2026-01-05 00:30:38.985113 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-05 00:30:38.985130 | orchestrator | Monday 05 January 2026 00:29:03 +0000 (0:00:35.875) 0:01:51.685 ******** 2026-01-05 00:30:38.985141 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:38.985153 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:30:38.985163 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:30:38.985174 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:30:38.985185 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:30:38.985196 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:30:38.985207 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:30:38.985218 | orchestrator | 2026-01-05 00:30:38.985229 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-05 00:30:38.985240 | orchestrator | Monday 05 January 2026 00:30:20 +0000 (0:01:16.767) 0:03:08.452 ******** 2026-01-05 00:30:38.985251 | orchestrator | ok: [testbed-manager] 2026-01-05 00:30:38.985262 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:38.985272 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:38.985283 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:38.985294 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:38.985305 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:38.985315 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:38.985326 | orchestrator | 2026-01-05 00:30:38.985337 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-01-05 00:30:38.985348 | orchestrator | Monday 05 January 2026 00:30:22 +0000 (0:00:02.105) 0:03:10.557 ******** 2026-01-05 00:30:38.985359 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:30:38.985370 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:30:38.985381 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:30:38.985391 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:30:38.985402 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:30:38.985412 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:30:38.985423 | orchestrator | changed: [testbed-manager] 2026-01-05 00:30:38.985434 | orchestrator | 2026-01-05 00:30:38.985445 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-05 00:30:38.985456 | orchestrator | Monday 05 January 2026 00:30:36 +0000 (0:00:13.880) 0:03:24.438 ******** 2026-01-05 00:30:38.985511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-05 00:30:38.985548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-05 00:30:38.985573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-05 00:30:38.985586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-05 00:30:38.985598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-05 00:30:38.985609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-05 00:30:38.985626 | orchestrator | 2026-01-05 00:30:38.985644 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-05 00:30:38.985662 | orchestrator | Monday 05 January 2026 00:30:37 +0000 (0:00:00.461) 0:03:24.899 ******** 2026-01-05 00:30:38.985682 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-01-05 00:30:38.985702 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-05 00:30:38.985720 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:38.985738 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:38.985756 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-05 00:30:38.985774 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-05 00:30:38.985792 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:38.985811 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:38.985829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:30:38.985874 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:30:38.985896 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-05 00:30:38.985907 | orchestrator | 2026-01-05 00:30:38.985924 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-05 00:30:38.985936 | orchestrator | Monday 05 January 2026 00:30:38 +0000 (0:00:01.765) 0:03:26.665 ******** 2026-01-05 00:30:38.985947 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:30:38.985959 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:30:38.985970 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:30:38.985981 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:30:38.985991 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-01-05 00:30:38.986010 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:30:47.926447 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:30:47.926604 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:30:47.926621 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:30:47.926633 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:30:47.926645 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:30:47.926657 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:30:47.926668 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:30:47.926679 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-05 00:30:47.926690 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-05 00:30:47.926701 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:30:47.926712 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:30:47.926724 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:30:47.926735 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:30:47.926746 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 
8192})  2026-01-05 00:30:47.926757 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:47.926770 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:30:47.926781 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:30:47.926793 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:30:47.926804 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:30:47.926815 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-05 00:30:47.926826 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-05 00:30:47.926837 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:30:47.926876 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:30:47.926888 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-05 00:30:47.926899 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:30:47.926910 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-05 00:30:47.926921 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:30:47.926932 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-05 00:30:47.926943 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-05 00:30:47.926954 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 
16777216})  2026-01-05 00:30:47.926965 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-05 00:30:47.926976 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-05 00:30:47.926988 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-05 00:30:47.926999 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-05 00:30:47.927017 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-05 00:30:47.927044 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-05 00:30:47.927056 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:30:47.927068 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:30:47.927079 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-05 00:30:47.927090 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-05 00:30:47.927101 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-05 00:30:47.927112 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-05 00:30:47.927123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-05 00:30:47.927151 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-05 00:30:47.927163 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-05 00:30:47.927174 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 
3}) 2026-01-05 00:30:47.927185 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-05 00:30:47.927196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-05 00:30:47.927207 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-05 00:30:47.927218 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-05 00:30:47.927229 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-05 00:30:47.927239 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-05 00:30:47.927250 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-05 00:30:47.927261 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-05 00:30:47.927272 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-05 00:30:47.927283 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-05 00:30:47.927294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-05 00:30:47.927305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-05 00:30:47.927316 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-05 00:30:47.927327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-05 00:30:47.927338 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-05 00:30:47.927349 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-05 00:30:47.927360 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-05 00:30:47.927371 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-05 00:30:47.927383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-05 00:30:47.927394 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-05 00:30:47.927405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-05 00:30:47.927423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-05 00:30:47.927442 | orchestrator | 2026-01-05 00:30:47.927461 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-01-05 00:30:47.927481 | orchestrator | Monday 05 January 2026 00:30:44 +0000 (0:00:06.038) 0:03:32.704 ******** 2026-01-05 00:30:47.927499 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:30:47.927517 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:30:47.927534 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:30:47.927551 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:30:47.927568 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:30:47.927585 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 00:30:47.927604 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-05 
00:30:47.927623 | orchestrator | 2026-01-05 00:30:47.927643 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-01-05 00:30:47.927660 | orchestrator | Monday 05 January 2026 00:30:46 +0000 (0:00:01.551) 0:03:34.255 ******** 2026-01-05 00:30:47.927685 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:30:47.927697 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:30:47.927709 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:30:47.927720 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:30:47.927731 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:30:47.927742 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:30:47.927753 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:30:47.927764 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:30:47.927775 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:30:47.927786 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:30:47.927806 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:31:01.910308 | orchestrator | 2026-01-05 00:31:01.910375 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-01-05 00:31:01.910381 | orchestrator | Monday 05 January 2026 00:30:47 +0000 (0:00:01.459) 0:03:35.714 ******** 2026-01-05 00:31:01.910385 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 
00:31:01.910390 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:31:01.910395 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:31:01.910399 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:31:01.910403 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:31:01.910407 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:31:01.910411 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-05 00:31:01.910415 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:31:01.910419 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:31:01.910423 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:31:01.910427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-05 00:31:01.910443 | orchestrator | 2026-01-05 00:31:01.910447 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-01-05 00:31:01.910451 | orchestrator | Monday 05 January 2026 00:30:48 +0000 (0:00:00.603) 0:03:36.317 ******** 2026-01-05 00:31:01.910455 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-05 00:31:01.910458 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:31:01.910462 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-05 00:31:01.910466 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-05 00:31:01.910470 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:01.910474 
| orchestrator | skipping: [testbed-node-1] 2026-01-05 00:31:01.910478 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-05 00:31:01.910482 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:01.910486 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-05 00:31:01.910490 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-05 00:31:01.910494 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-05 00:31:01.910497 | orchestrator | 2026-01-05 00:31:01.910501 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-01-05 00:31:01.910505 | orchestrator | Monday 05 January 2026 00:30:49 +0000 (0:00:00.608) 0:03:36.926 ******** 2026-01-05 00:31:01.910509 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:31:01.910512 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:31:01.910516 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:31:01.910520 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:31:01.910524 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:01.910528 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:31:01.910531 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:01.910535 | orchestrator | 2026-01-05 00:31:01.910539 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-01-05 00:31:01.910543 | orchestrator | Monday 05 January 2026 00:30:49 +0000 (0:00:00.305) 0:03:37.232 ******** 2026-01-05 00:31:01.910547 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:31:01.910551 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:31:01.910555 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:31:01.910558 | orchestrator | ok: [testbed-node-0] 
2026-01-05 00:31:01.910562 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:31:01.910566 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:31:01.910570 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:01.910573 | orchestrator | 2026-01-05 00:31:01.910577 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-01-05 00:31:01.910581 | orchestrator | Monday 05 January 2026 00:30:55 +0000 (0:00:05.664) 0:03:42.896 ******** 2026-01-05 00:31:01.910585 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-01-05 00:31:01.910589 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-01-05 00:31:01.910592 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:31:01.910596 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:31:01.910600 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-01-05 00:31:01.910604 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-01-05 00:31:01.910608 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:31:01.910612 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:31:01.910616 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-01-05 00:31:01.910620 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-01-05 00:31:01.910633 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:01.910638 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:31:01.910645 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-01-05 00:31:01.910649 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:01.910653 | orchestrator | 2026-01-05 00:31:01.910656 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-01-05 00:31:01.910660 | orchestrator | Monday 05 January 2026 00:30:55 +0000 (0:00:00.330) 0:03:43.227 ******** 2026-01-05 00:31:01.910664 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-01-05 00:31:01.910668 | orchestrator 
| ok: [testbed-node-4] => (item=cron) 2026-01-05 00:31:01.910672 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-01-05 00:31:01.910684 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-01-05 00:31:01.910688 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-01-05 00:31:01.910692 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-01-05 00:31:01.910695 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-01-05 00:31:01.910699 | orchestrator | 2026-01-05 00:31:01.910703 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-01-05 00:31:01.910707 | orchestrator | Monday 05 January 2026 00:30:56 +0000 (0:00:01.313) 0:03:44.540 ******** 2026-01-05 00:31:01.910711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:31:01.910716 | orchestrator | 2026-01-05 00:31:01.910720 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-01-05 00:31:01.910724 | orchestrator | Monday 05 January 2026 00:30:57 +0000 (0:00:00.562) 0:03:45.103 ******** 2026-01-05 00:31:01.910728 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:01.910732 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:31:01.910735 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:31:01.910739 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:31:01.910743 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:31:01.910747 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:31:01.910750 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:31:01.910754 | orchestrator | 2026-01-05 00:31:01.910758 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-01-05 00:31:01.910762 | orchestrator | Monday 05 January 2026 00:30:58 +0000 
(0:00:01.412) 0:03:46.515 ******** 2026-01-05 00:31:01.910765 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:01.910769 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:31:01.910773 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:31:01.910777 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:31:01.910780 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:31:01.910784 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:31:01.910788 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:31:01.910792 | orchestrator | 2026-01-05 00:31:01.910795 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-05 00:31:01.910799 | orchestrator | Monday 05 January 2026 00:30:59 +0000 (0:00:00.699) 0:03:47.214 ******** 2026-01-05 00:31:01.910803 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:01.910807 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:01.910811 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:01.910814 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:01.910818 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:01.910822 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:01.910826 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:01.910830 | orchestrator | 2026-01-05 00:31:01.910834 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-05 00:31:01.910837 | orchestrator | Monday 05 January 2026 00:31:00 +0000 (0:00:00.735) 0:03:47.949 ******** 2026-01-05 00:31:01.910841 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:01.910845 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:31:01.910849 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:31:01.910890 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:31:01.910896 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:31:01.910905 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:31:01.910909 | orchestrator | ok: 
[testbed-node-2] 2026-01-05 00:31:01.910914 | orchestrator | 2026-01-05 00:31:01.910918 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-05 00:31:01.910922 | orchestrator | Monday 05 January 2026 00:31:00 +0000 (0:00:00.735) 0:03:48.685 ******** 2026-01-05 00:31:01.910929 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571515.2098372, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:01.910938 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571534.4690702, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:01.910942 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571526.7185993, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:01.910957 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571530.0329237, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113401 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571543.095088, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113491 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571538.800492, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113498 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767571545.5579479, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113520 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113524 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113542 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113546 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113562 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113566 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-01-05 00:31:07.113570 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 00:31:07.113579 | orchestrator | 2026-01-05 00:31:07.113586 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-05 00:31:07.113594 | orchestrator | Monday 05 January 2026 00:31:01 +0000 (0:00:01.014) 0:03:49.699 ******** 2026-01-05 00:31:07.113600 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:07.113608 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:07.113614 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:07.113621 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:07.113627 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:07.113633 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:07.113640 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:07.113644 | orchestrator | 2026-01-05 00:31:07.113648 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-05 00:31:07.113651 | orchestrator | Monday 05 January 2026 00:31:03 +0000 (0:00:01.134) 0:03:50.834 ******** 2026-01-05 00:31:07.113655 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:07.113659 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:07.113663 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:07.113666 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:07.113670 | 
orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:07.113674 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:07.113677 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:07.113681 | orchestrator | 2026-01-05 00:31:07.113685 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-05 00:31:07.113689 | orchestrator | Monday 05 January 2026 00:31:04 +0000 (0:00:01.241) 0:03:52.075 ******** 2026-01-05 00:31:07.113692 | orchestrator | changed: [testbed-manager] 2026-01-05 00:31:07.113696 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:31:07.113700 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:31:07.113703 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:31:07.113707 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:31:07.113711 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:31:07.113715 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:31:07.113718 | orchestrator | 2026-01-05 00:31:07.113727 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-05 00:31:07.113731 | orchestrator | Monday 05 January 2026 00:31:05 +0000 (0:00:01.291) 0:03:53.367 ******** 2026-01-05 00:31:07.113737 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:31:07.113743 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:31:07.113748 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:31:07.113754 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:31:07.113760 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:31:07.113765 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:31:07.113771 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:31:07.113776 | orchestrator | 2026-01-05 00:31:07.113782 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-05 00:31:07.113788 | orchestrator | Monday 05 January 2026 00:31:05 +0000 
(0:00:00.325) 0:03:53.692 ******** 2026-01-05 00:31:07.113794 | orchestrator | ok: [testbed-manager] 2026-01-05 00:31:07.113801 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:31:07.113807 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:31:07.113813 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:31:07.113820 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:31:07.113826 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:31:07.113832 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:31:07.113838 | orchestrator | 2026-01-05 00:31:07.113845 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-05 00:31:07.113855 | orchestrator | Monday 05 January 2026 00:31:06 +0000 (0:00:00.802) 0:03:54.495 ******** 2026-01-05 00:31:07.113935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:31:07.113943 | orchestrator | 2026-01-05 00:31:07.113950 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-05 00:31:07.113964 | orchestrator | Monday 05 January 2026 00:31:07 +0000 (0:00:00.411) 0:03:54.907 ******** 2026-01-05 00:32:25.376251 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:25.376387 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:32:25.376405 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:32:25.376418 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:32:25.376429 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:32:25.376440 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:32:25.376452 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:32:25.376463 | orchestrator | 2026-01-05 00:32:25.376476 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-01-05 00:32:25.376489 | orchestrator | Monday 05 January 2026 00:31:15 +0000 (0:00:08.794) 0:04:03.702 ******** 2026-01-05 00:32:25.376500 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:25.376512 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:25.376523 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:25.376534 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:25.376545 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:25.376556 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:25.376567 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:25.376578 | orchestrator | 2026-01-05 00:32:25.376589 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-05 00:32:25.376600 | orchestrator | Monday 05 January 2026 00:31:17 +0000 (0:00:01.278) 0:04:04.980 ******** 2026-01-05 00:32:25.376611 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:25.376622 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:25.376633 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:25.376644 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:25.376655 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:25.376665 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:25.376676 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:25.376687 | orchestrator | 2026-01-05 00:32:25.376698 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-05 00:32:25.376710 | orchestrator | Monday 05 January 2026 00:31:19 +0000 (0:00:01.982) 0:04:06.962 ******** 2026-01-05 00:32:25.376721 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:25.376733 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:25.376744 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:25.376755 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:25.376766 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:25.376779 | orchestrator | ok: [testbed-node-1] 
2026-01-05 00:32:25.376792 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:25.376804 | orchestrator | 2026-01-05 00:32:25.376817 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-05 00:32:25.376830 | orchestrator | Monday 05 January 2026 00:31:19 +0000 (0:00:00.325) 0:04:07.288 ******** 2026-01-05 00:32:25.376843 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:25.376855 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:25.376869 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:25.376881 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:25.376920 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:25.376942 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:25.376956 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:25.376969 | orchestrator | 2026-01-05 00:32:25.376982 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-05 00:32:25.376995 | orchestrator | Monday 05 January 2026 00:31:19 +0000 (0:00:00.302) 0:04:07.591 ******** 2026-01-05 00:32:25.377046 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:25.377066 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:25.377083 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:25.377100 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:25.377119 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:25.377136 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:25.377153 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:32:25.377171 | orchestrator | 2026-01-05 00:32:25.377189 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-05 00:32:25.377208 | orchestrator | Monday 05 January 2026 00:31:20 +0000 (0:00:00.304) 0:04:07.895 ******** 2026-01-05 00:32:25.377228 | orchestrator | ok: [testbed-manager] 2026-01-05 00:32:25.377245 | orchestrator | ok: [testbed-node-2] 
2026-01-05 00:32:25.377263 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:32:25.377275 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:32:25.377285 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:32:25.377296 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:32:25.377306 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:32:25.377317 | orchestrator | 2026-01-05 00:32:25.377328 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-05 00:32:25.377339 | orchestrator | Monday 05 January 2026 00:31:25 +0000 (0:00:05.470) 0:04:13.366 ******** 2026-01-05 00:32:25.377352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:32:25.377366 | orchestrator | 2026-01-05 00:32:25.377377 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-05 00:32:25.377388 | orchestrator | Monday 05 January 2026 00:31:25 +0000 (0:00:00.421) 0:04:13.787 ******** 2026-01-05 00:32:25.377399 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-05 00:32:25.377410 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-05 00:32:25.377422 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-05 00:32:25.377433 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:32:25.377444 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-05 00:32:25.377455 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:32:25.377487 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-05 00:32:25.377499 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-05 00:32:25.377510 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  
2026-01-05 00:32:25.377522 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:25.377532 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-05 00:32:25.377543 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-05 00:32:25.377554 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-05 00:32:25.377565 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:25.377576 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-05 00:32:25.377587 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-05 00:32:25.377619 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:25.377630 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:25.377641 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-05 00:32:25.377652 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-05 00:32:25.377663 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:25.377674 | orchestrator |
2026-01-05 00:32:25.377685 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-05 00:32:25.377696 | orchestrator | Monday 05 January 2026 00:31:26 +0000 (0:00:00.362) 0:04:14.149 ********
2026-01-05 00:32:25.377708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:25.377730 | orchestrator |
2026-01-05 00:32:25.377741 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-05 00:32:25.377752 | orchestrator | Monday 05 January 2026 00:31:26 +0000 (0:00:00.411) 0:04:14.561 ********
2026-01-05 00:32:25.377762 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-05 00:32:25.377773 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:25.377790 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-05 00:32:25.377809 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-05 00:32:25.377826 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:25.377845 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-05 00:32:25.377862 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:25.377878 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:25.377953 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-05 00:32:25.377973 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-05 00:32:25.377989 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:25.378006 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:25.378109 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-05 00:32:25.378122 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:25.378133 | orchestrator |
2026-01-05 00:32:25.378144 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-05 00:32:25.378155 | orchestrator | Monday 05 January 2026 00:31:27 +0000 (0:00:00.334) 0:04:14.896 ********
2026-01-05 00:32:25.378167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:25.378178 | orchestrator |
2026-01-05 00:32:25.378189 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-05 00:32:25.378200 | orchestrator | Monday 05 January 2026 00:31:27 +0000 (0:00:00.430) 0:04:15.326 ********
2026-01-05 00:32:25.378211 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:25.378222 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:25.378232 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:25.378243 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:25.378254 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:25.378265 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:25.378275 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:25.378286 | orchestrator |
2026-01-05 00:32:25.378297 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-05 00:32:25.378308 | orchestrator | Monday 05 January 2026 00:32:01 +0000 (0:00:33.886) 0:04:49.213 ********
2026-01-05 00:32:25.378319 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:25.378329 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:25.378340 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:25.378351 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:25.378369 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:25.378381 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:25.378391 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:25.378402 | orchestrator |
2026-01-05 00:32:25.378413 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-05 00:32:25.378424 | orchestrator | Monday 05 January 2026 00:32:09 +0000 (0:00:08.099) 0:04:57.313 ********
2026-01-05 00:32:25.378435 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:25.378446 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:25.378456 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:25.378467 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:25.378478 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:25.378488 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:25.378509 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:25.378520 | orchestrator |
2026-01-05 00:32:25.378531 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-05 00:32:25.378542 | orchestrator | Monday 05 January 2026 00:32:17 +0000 (0:00:07.576) 0:05:04.889 ********
2026-01-05 00:32:25.378553 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:25.378564 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:25.378575 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:25.378586 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:25.378597 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:25.378607 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:25.378618 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:25.378628 | orchestrator |
2026-01-05 00:32:25.378639 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-05 00:32:25.378651 | orchestrator | Monday 05 January 2026 00:32:18 +0000 (0:00:01.815) 0:05:06.705 ********
2026-01-05 00:32:25.378661 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:25.378672 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:25.378683 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:25.378694 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:25.378705 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:25.378716 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:25.378727 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:25.378738 | orchestrator |
2026-01-05 00:32:25.378761 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-05 00:32:37.286010 | orchestrator | Monday 05 January 2026 00:32:25 +0000 (0:00:06.453) 0:05:13.159 ********
2026-01-05 00:32:37.286263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:37.286295 | orchestrator |
2026-01-05 00:32:37.286316 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-05 00:32:37.286336 | orchestrator | Monday 05 January 2026 00:32:26 +0000 (0:00:00.659) 0:05:13.819 ********
2026-01-05 00:32:37.286356 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:37.286377 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:37.286396 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:37.286415 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:37.286435 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:37.286455 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:37.286475 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:37.286492 | orchestrator |
2026-01-05 00:32:37.286505 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-05 00:32:37.286518 | orchestrator | Monday 05 January 2026 00:32:26 +0000 (0:00:00.769) 0:05:14.588 ********
2026-01-05 00:32:37.286532 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:37.286546 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:37.286559 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:37.286572 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:37.286584 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:37.286596 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:37.286609 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:37.286621 | orchestrator |
2026-01-05 00:32:37.286634 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-05 00:32:37.286646 | orchestrator | Monday 05 January 2026 00:32:28 +0000 (0:00:01.717) 0:05:16.305 ********
2026-01-05 00:32:37.286658 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:32:37.286676 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:32:37.286695 | orchestrator | changed: [testbed-manager]
2026-01-05 00:32:37.286714 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:32:37.286731 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:32:37.286749 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:32:37.286764 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:32:37.286781 | orchestrator |
2026-01-05 00:32:37.286836 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-05 00:32:37.286857 | orchestrator | Monday 05 January 2026 00:32:29 +0000 (0:00:00.809) 0:05:17.115 ********
2026-01-05 00:32:37.286874 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:37.286928 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:37.286947 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:37.286964 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:37.286982 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:37.287000 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:37.287018 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:37.287037 | orchestrator |
2026-01-05 00:32:37.287054 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-05 00:32:37.287072 | orchestrator | Monday 05 January 2026 00:32:29 +0000 (0:00:00.292) 0:05:17.408 ********
2026-01-05 00:32:37.287092 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:37.287111 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:37.287130 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:37.287148 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:37.287161 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:37.287172 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:37.287183 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:37.287194 | orchestrator |
2026-01-05 00:32:37.287205 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-05 00:32:37.287215 | orchestrator | Monday 05 January 2026 00:32:30 +0000 (0:00:00.449) 0:05:17.858 ********
2026-01-05 00:32:37.287226 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:37.287237 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:37.287248 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:37.287278 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:37.287289 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:37.287303 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:37.287322 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:37.287339 | orchestrator |
2026-01-05 00:32:37.287358 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-05 00:32:37.287376 | orchestrator | Monday 05 January 2026 00:32:30 +0000 (0:00:00.318) 0:05:18.176 ********
2026-01-05 00:32:37.287394 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:37.287412 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:37.287430 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:37.287446 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:37.287464 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:37.287484 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:37.287501 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:37.287519 | orchestrator |
2026-01-05 00:32:37.287539 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-05 00:32:37.287559 | orchestrator | Monday 05 January 2026 00:32:30 +0000 (0:00:00.308) 0:05:18.485 ********
2026-01-05 00:32:37.287575 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:37.287586 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:37.287597 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:37.287608 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:37.287620 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:37.287630 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:37.287641 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:37.287652 | orchestrator |
2026-01-05 00:32:37.287663 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-05 00:32:37.287674 | orchestrator | Monday 05 January 2026 00:32:31 +0000 (0:00:00.334) 0:05:18.819 ********
2026-01-05 00:32:37.287685 | orchestrator | ok: [testbed-manager] =>
2026-01-05 00:32:37.287696 | orchestrator |  docker_version: 5:27.5.1
2026-01-05 00:32:37.287706 | orchestrator | ok: [testbed-node-3] =>
2026-01-05 00:32:37.287717 | orchestrator |  docker_version: 5:27.5.1
2026-01-05 00:32:37.287728 | orchestrator | ok: [testbed-node-4] =>
2026-01-05 00:32:37.287739 | orchestrator |  docker_version: 5:27.5.1
2026-01-05 00:32:37.287763 | orchestrator | ok: [testbed-node-5] =>
2026-01-05 00:32:37.287774 | orchestrator |  docker_version: 5:27.5.1
2026-01-05 00:32:37.287810 | orchestrator | ok: [testbed-node-0] =>
2026-01-05 00:32:37.287821 | orchestrator |  docker_version: 5:27.5.1
2026-01-05 00:32:37.287832 | orchestrator | ok: [testbed-node-1] =>
2026-01-05 00:32:37.287843 | orchestrator |  docker_version: 5:27.5.1
2026-01-05 00:32:37.287854 | orchestrator | ok: [testbed-node-2] =>
2026-01-05 00:32:37.287865 | orchestrator |  docker_version: 5:27.5.1
2026-01-05 00:32:37.287882 | orchestrator |
2026-01-05 00:32:37.287931 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-05 00:32:37.287951 | orchestrator | Monday 05 January 2026 00:32:31 +0000 (0:00:00.328) 0:05:19.147 ********
2026-01-05 00:32:37.287969 | orchestrator | ok: [testbed-manager] =>
2026-01-05 00:32:37.287985 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-05 00:32:37.288002 | orchestrator | ok: [testbed-node-3] =>
2026-01-05 00:32:37.288020 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-05 00:32:37.288039 | orchestrator | ok: [testbed-node-4] =>
2026-01-05 00:32:37.288057 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-05 00:32:37.288077 | orchestrator | ok: [testbed-node-5] =>
2026-01-05 00:32:37.288094 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-05 00:32:37.288112 | orchestrator | ok: [testbed-node-0] =>
2026-01-05 00:32:37.288124 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-05 00:32:37.288135 | orchestrator | ok: [testbed-node-1] =>
2026-01-05 00:32:37.288146 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-05 00:32:37.288157 | orchestrator | ok: [testbed-node-2] =>
2026-01-05 00:32:37.288167 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-05 00:32:37.288178 | orchestrator |
2026-01-05 00:32:37.288189 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-05 00:32:37.288200 | orchestrator | Monday 05 January 2026 00:32:31 +0000 (0:00:00.322) 0:05:19.470 ********
2026-01-05 00:32:37.288211 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:37.288222 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:37.288232 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:37.288243 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:37.288254 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:37.288265 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:37.288276 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:37.288286 | orchestrator |
2026-01-05 00:32:37.288297 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-05 00:32:37.288308 | orchestrator | Monday 05 January 2026 00:32:31 +0000 (0:00:00.295) 0:05:19.765 ********
2026-01-05 00:32:37.288319 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:37.288330 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:37.288341 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:37.288351 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:32:37.288362 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:32:37.288373 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:32:37.288384 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:32:37.288394 | orchestrator |
2026-01-05 00:32:37.288405 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-05 00:32:37.288416 | orchestrator | Monday 05 January 2026 00:32:32 +0000 (0:00:00.331) 0:05:20.097 ********
2026-01-05 00:32:37.288432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:32:37.288453 | orchestrator |
2026-01-05 00:32:37.288472 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-05 00:32:37.288489 | orchestrator | Monday 05 January 2026 00:32:32 +0000 (0:00:00.457) 0:05:20.555 ********
2026-01-05 00:32:37.288505 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:37.288521 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:37.288550 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:37.288568 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:37.288586 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:37.288605 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:37.288623 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:37.288642 | orchestrator |
2026-01-05 00:32:37.288661 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-05 00:32:37.288690 | orchestrator | Monday 05 January 2026 00:32:33 +0000 (0:00:01.042) 0:05:21.598 ********
2026-01-05 00:32:37.288708 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:32:37.288725 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:32:37.288744 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:32:37.288762 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:32:37.288782 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:32:37.288800 | orchestrator | ok: [testbed-manager]
2026-01-05 00:32:37.288819 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:32:37.288832 | orchestrator |
2026-01-05 00:32:37.288843 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-05 00:32:37.288856 | orchestrator | Monday 05 January 2026 00:32:36 +0000 (0:00:03.014) 0:05:24.613 ********
2026-01-05 00:32:37.288867 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-05 00:32:37.288878 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-05 00:32:37.288933 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-05 00:32:37.288947 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-05 00:32:37.288958 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-05 00:32:37.288968 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-05 00:32:37.288979 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:32:37.288990 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-05 00:32:37.289001 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-05 00:32:37.289011 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:32:37.289022 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-05 00:32:37.289033 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-05 00:32:37.289043 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-05 00:32:37.289054 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-05 00:32:37.289065 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:32:37.289075 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-05 00:32:37.289099 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-05 00:33:41.394826 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-05 00:33:41.395066 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:33:41.395095 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-05 00:33:41.395113 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-05 00:33:41.395130 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-05 00:33:41.395147 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:33:41.395163 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:33:41.395180 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-05 00:33:41.395197 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-05 00:33:41.395214 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-05 00:33:41.395230 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:33:41.395246 | orchestrator |
2026-01-05 00:33:41.395264 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-05 00:33:41.395282 | orchestrator | Monday 05 January 2026 00:32:37 +0000 (0:00:00.693) 0:05:25.306 ********
2026-01-05 00:33:41.395299 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:41.395317 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.395333 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.395372 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.395384 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.395396 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.395411 | orchestrator | changed: [testbed-node-2]
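The "Check whether packages are installed that should not be installed" task above guards against packages that conflict with the Docker CE install (containerd, docker.io, docker-engine, as listed in the log); every host skips it here because none of them are present. A minimal shell sketch of the same idea (the role's actual implementation is an Ansible task over gathered package facts, not this script; only the blocklist is taken from the log):

```shell
# Blocklist of packages that conflict with Docker CE (from the log).
blocklist="containerd docker.io docker-engine"

# Report every blocklisted package found in a space-separated list of
# installed package names; print nothing when there is no conflict.
check_conflicts() {
    for pkg in $blocklist; do
        case " $1 " in
            *" $pkg "*) echo "conflict: $pkg" ;;
        esac
    done
}

check_conflicts "vim docker.io curl"   # prints "conflict: docker.io"
```

On a real Debian-family host the installed-package list would come from something like `dpkg-query -W -f='${Package} '`.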
2026-01-05 00:33:41.395427 | orchestrator |
2026-01-05 00:33:41.395445 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-05 00:33:41.395462 | orchestrator | Monday 05 January 2026 00:32:44 +0000 (0:00:06.605) 0:05:31.911 ********
2026-01-05 00:33:41.395478 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.395495 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:41.395512 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.395528 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.395547 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.395564 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.395582 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.395598 | orchestrator |
2026-01-05 00:33:41.395615 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-05 00:33:41.395632 | orchestrator | Monday 05 January 2026 00:32:45 +0000 (0:00:01.086) 0:05:32.998 ********
2026-01-05 00:33:41.395649 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:41.395667 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.395684 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.395700 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.395716 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.395732 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.395748 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.395764 | orchestrator |
2026-01-05 00:33:41.395781 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-05 00:33:41.395798 | orchestrator | Monday 05 January 2026 00:32:53 +0000 (0:00:08.734) 0:05:41.733 ********
2026-01-05 00:33:41.395815 | orchestrator | changed: [testbed-manager]
2026-01-05 00:33:41.395831 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.395848 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.395864 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.395903 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.395920 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.395936 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.395953 | orchestrator |
2026-01-05 00:33:41.395969 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-05 00:33:41.395986 | orchestrator | Monday 05 January 2026 00:32:57 +0000 (0:00:03.478) 0:05:45.212 ********
2026-01-05 00:33:41.396003 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:41.396019 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.396036 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.396053 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.396069 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.396086 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.396102 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.396118 | orchestrator |
2026-01-05 00:33:41.396134 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-05 00:33:41.396150 | orchestrator | Monday 05 January 2026 00:32:58 +0000 (0:00:01.409) 0:05:46.621 ********
2026-01-05 00:33:41.396167 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:41.396183 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.396200 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.396217 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.396234 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.396251 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.396267 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.396283 | orchestrator |
2026-01-05 00:33:41.396300 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-05 00:33:41.396317 | orchestrator | Monday 05 January 2026 00:33:00 +0000 (0:00:00.660) 0:05:48.253 ********
2026-01-05 00:33:41.396334 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:33:41.396362 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:33:41.396378 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:33:41.396394 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:33:41.396410 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:33:41.396427 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:33:41.396443 | orchestrator | changed: [testbed-manager]
2026-01-05 00:33:41.396457 | orchestrator |
2026-01-05 00:33:41.396467 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-05 00:33:41.396477 | orchestrator | Monday 05 January 2026 00:33:01 +0000 (0:00:00.660) 0:05:48.913 ********
2026-01-05 00:33:41.396486 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:41.396496 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.396506 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.396515 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.396524 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.396534 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.396543 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.396553 | orchestrator |
2026-01-05 00:33:41.396562 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-05 00:33:41.396594 | orchestrator | Monday 05 January 2026 00:33:11 +0000 (0:00:10.095) 0:05:59.009 ********
2026-01-05 00:33:41.396604 | orchestrator | changed: [testbed-manager]
2026-01-05 00:33:41.396614 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.396624 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.396633 | orchestrator | changed: [testbed-node-5]
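The "Pin docker package version" and "Lock containerd package" tasks above correspond to the standard apt mechanisms for freezing a package at a known version: an apt preferences pin and a package hold. A minimal sketch of what such a pin can look like (the file path and pin pattern are assumptions; only the version 5:27.5.1 comes from the log):

```
# /etc/apt/preferences.d/docker (illustrative path, not taken from the log)
Package: docker-ce
Pin: version 5:27.5.1*
Pin-Priority: 1001
```

The lock/unlock pair seen in the log matches the hold mechanism, typically `apt-mark hold containerd.io` to freeze the package and `apt-mark unhold` to release it again before a deliberate upgrade.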
2026-01-05 00:33:41.396643 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.396652 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.396662 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.396672 | orchestrator |
2026-01-05 00:33:41.396681 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-05 00:33:41.396691 | orchestrator | Monday 05 January 2026 00:33:12 +0000 (0:00:00.964) 0:05:59.973 ********
2026-01-05 00:33:41.396701 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:41.396711 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.396720 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.396730 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.396739 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.396749 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.396759 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.396768 | orchestrator |
2026-01-05 00:33:41.396778 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-05 00:33:41.396788 | orchestrator | Monday 05 January 2026 00:33:22 +0000 (0:00:10.245) 0:06:10.218 ********
2026-01-05 00:33:41.396797 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:41.396807 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.396816 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.396826 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.396836 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.396845 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.396855 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.396864 | orchestrator |
2026-01-05 00:33:41.396899 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-05 00:33:41.396916 | orchestrator | Monday 05 January 2026 00:33:34 +0000 (0:00:12.195) 0:06:22.414 ********
2026-01-05 00:33:41.396932 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-05 00:33:41.396949 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-05 00:33:41.396959 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-05 00:33:41.396969 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-05 00:33:41.396978 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-05 00:33:41.396988 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-05 00:33:41.396997 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-05 00:33:41.397015 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-05 00:33:41.397025 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-05 00:33:41.397034 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-05 00:33:41.397044 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-05 00:33:41.397054 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-05 00:33:41.397114 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-05 00:33:41.397125 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-05 00:33:41.397135 | orchestrator |
2026-01-05 00:33:41.397145 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-05 00:33:41.397155 | orchestrator | Monday 05 January 2026 00:33:35 +0000 (0:00:01.244) 0:06:23.659 ********
2026-01-05 00:33:41.397165 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:33:41.397175 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:33:41.397184 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:33:41.397194 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:33:41.397204 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:33:41.397213 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:33:41.397223 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:33:41.397233 | orchestrator |
2026-01-05 00:33:41.397242 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-05 00:33:41.397252 | orchestrator | Monday 05 January 2026 00:33:36 +0000 (0:00:00.556) 0:06:24.216 ********
2026-01-05 00:33:41.397267 | orchestrator | ok: [testbed-manager]
2026-01-05 00:33:41.397277 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:33:41.397287 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:33:41.397296 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:33:41.397306 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:33:41.397315 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:33:41.397325 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:33:41.397335 | orchestrator |
2026-01-05 00:33:41.397345 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-05 00:33:41.397356 | orchestrator | Monday 05 January 2026 00:33:40 +0000 (0:00:03.962) 0:06:28.179 ********
2026-01-05 00:33:41.397366 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:33:41.397376 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:33:41.397385 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:33:41.397395 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:33:41.397404 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:33:41.397414 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:33:41.397423 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:33:41.397433 | orchestrator |
2026-01-05 00:33:41.397444 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-05 00:33:41.397454 | orchestrator | Monday 05 January 2026 00:33:40 +0000 (0:00:00.522) 0:06:28.701 ********
2026-01-05 00:33:41.397464 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-05 00:33:41.397473 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-05 00:33:41.397483 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:33:41.397493 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-05 00:33:41.397503 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-05 00:33:41.397512 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:33:41.397522 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-05 00:33:41.397532 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-05 00:33:41.397541 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:33:41.397558 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-05 00:34:02.536167 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-05 00:34:02.536293 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:02.536312 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-05 00:34:02.536351 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-05 00:34:02.536364 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:02.536375 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-05 00:34:02.536386 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-05 00:34:02.536397 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:02.536408 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-05 00:34:02.536419 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-05 00:34:02.536430 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:02.536441 | orchestrator |
2026-01-05 00:34:02.536453 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-05 00:34:02.536465 | orchestrator | Monday 05 January 2026 00:33:41 +0000 (0:00:00.777) 0:06:29.478 ********
2026-01-05 00:34:02.536477 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:02.536488 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:02.536498 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:02.536509 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:02.536520 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:02.536531 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:02.536542 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:02.536553 | orchestrator |
2026-01-05 00:34:02.536564 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-05 00:34:02.536575 | orchestrator | Monday 05 January 2026 00:33:42 +0000 (0:00:00.561) 0:06:30.040 ********
2026-01-05 00:34:02.536586 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:02.536597 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:02.536607 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:02.536618 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:34:02.536629 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:34:02.536640 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:34:02.536651 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:34:02.536661 | orchestrator |
2026-01-05 00:34:02.536675 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-05 00:34:02.536688 | orchestrator | Monday 05 January 2026 00:33:42 +0000 (0:00:00.545) 0:06:30.585 ********
2026-01-05 00:34:02.536700 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:34:02.536713 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:34:02.536725 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:34:02.536739 | orchestrator | skipping:
[testbed-node-5] 2026-01-05 00:34:02.536751 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:34:02.536763 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:34:02.536776 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:34:02.536789 | orchestrator | 2026-01-05 00:34:02.536802 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-01-05 00:34:02.536815 | orchestrator | Monday 05 January 2026 00:33:43 +0000 (0:00:00.549) 0:06:31.135 ******** 2026-01-05 00:34:02.536827 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:02.536864 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:02.536877 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:02.536890 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:02.536902 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:02.536915 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:02.536928 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:02.536940 | orchestrator | 2026-01-05 00:34:02.536953 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-01-05 00:34:02.536966 | orchestrator | Monday 05 January 2026 00:33:45 +0000 (0:00:02.062) 0:06:33.198 ******** 2026-01-05 00:34:02.536981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:34:02.537015 | orchestrator | 2026-01-05 00:34:02.537028 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-01-05 00:34:02.537041 | orchestrator | Monday 05 January 2026 00:33:46 +0000 (0:00:00.918) 0:06:34.116 ******** 2026-01-05 00:34:02.537052 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:02.537063 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:02.537074 | orchestrator | changed: 
[testbed-node-4] 2026-01-05 00:34:02.537085 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:02.537096 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:02.537107 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:02.537118 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:02.537129 | orchestrator | 2026-01-05 00:34:02.537140 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-01-05 00:34:02.537151 | orchestrator | Monday 05 January 2026 00:33:47 +0000 (0:00:00.889) 0:06:35.006 ******** 2026-01-05 00:34:02.537161 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:02.537172 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:02.537183 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:02.537194 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:02.537205 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:02.537215 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:02.537226 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:02.537237 | orchestrator | 2026-01-05 00:34:02.537248 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-01-05 00:34:02.537259 | orchestrator | Monday 05 January 2026 00:33:48 +0000 (0:00:00.884) 0:06:35.890 ******** 2026-01-05 00:34:02.537270 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:02.537281 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:02.537292 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:02.537303 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:02.537313 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:02.537324 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:02.537335 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:02.537346 | orchestrator | 2026-01-05 00:34:02.537357 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-01-05 00:34:02.537387 | orchestrator | Monday 05 January 2026 00:33:49 +0000 (0:00:01.602) 0:06:37.493 ******** 2026-01-05 00:34:02.537399 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:34:02.537410 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:02.537421 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:02.537433 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:02.537444 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:02.537455 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:02.537465 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:02.537476 | orchestrator | 2026-01-05 00:34:02.537487 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-05 00:34:02.537498 | orchestrator | Monday 05 January 2026 00:33:51 +0000 (0:00:01.443) 0:06:38.937 ******** 2026-01-05 00:34:02.537509 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:02.537520 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:02.537531 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:02.537542 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:02.537553 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:02.537564 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:02.537574 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:02.537585 | orchestrator | 2026-01-05 00:34:02.537596 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-05 00:34:02.537607 | orchestrator | Monday 05 January 2026 00:33:52 +0000 (0:00:01.353) 0:06:40.290 ******** 2026-01-05 00:34:02.537618 | orchestrator | changed: [testbed-manager] 2026-01-05 00:34:02.537629 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:02.537640 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:02.537651 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:02.537662 | orchestrator | changed: 
[testbed-node-0] 2026-01-05 00:34:02.537680 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:02.537691 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:02.537702 | orchestrator | 2026-01-05 00:34:02.537713 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-05 00:34:02.537724 | orchestrator | Monday 05 January 2026 00:33:53 +0000 (0:00:01.499) 0:06:41.789 ******** 2026-01-05 00:34:02.537735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:34:02.537746 | orchestrator | 2026-01-05 00:34:02.537757 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-05 00:34:02.537768 | orchestrator | Monday 05 January 2026 00:33:55 +0000 (0:00:01.077) 0:06:42.867 ******** 2026-01-05 00:34:02.537779 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:02.537790 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:02.537801 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:02.537812 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:02.537823 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:02.537833 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:02.537862 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:02.537874 | orchestrator | 2026-01-05 00:34:02.537885 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-05 00:34:02.537896 | orchestrator | Monday 05 January 2026 00:33:56 +0000 (0:00:01.514) 0:06:44.381 ******** 2026-01-05 00:34:02.537907 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:02.537918 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:02.537929 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:02.537939 | orchestrator | ok: [testbed-node-5] 
2026-01-05 00:34:02.537950 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:02.537960 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:02.537971 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:02.537981 | orchestrator | 2026-01-05 00:34:02.537993 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-05 00:34:02.538003 | orchestrator | Monday 05 January 2026 00:33:57 +0000 (0:00:01.155) 0:06:45.537 ******** 2026-01-05 00:34:02.538060 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:02.538074 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:02.538085 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:02.538096 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:02.538107 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:02.538117 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:02.538145 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:02.538157 | orchestrator | 2026-01-05 00:34:02.538168 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-05 00:34:02.538179 | orchestrator | Monday 05 January 2026 00:33:59 +0000 (0:00:02.167) 0:06:47.705 ******** 2026-01-05 00:34:02.538190 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:02.538201 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:02.538212 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:02.538223 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:02.538233 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:02.538244 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:02.538255 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:02.538266 | orchestrator | 2026-01-05 00:34:02.538277 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-05 00:34:02.538288 | orchestrator | Monday 05 January 2026 00:34:01 +0000 (0:00:01.325) 0:06:49.030 ******** 2026-01-05 00:34:02.538299 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:34:02.538310 | orchestrator | 2026-01-05 00:34:02.538321 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:34:02.538332 | orchestrator | Monday 05 January 2026 00:34:02 +0000 (0:00:00.962) 0:06:49.992 ******** 2026-01-05 00:34:02.538351 | orchestrator | 2026-01-05 00:34:02.538362 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:34:02.538373 | orchestrator | Monday 05 January 2026 00:34:02 +0000 (0:00:00.041) 0:06:50.034 ******** 2026-01-05 00:34:02.538384 | orchestrator | 2026-01-05 00:34:02.538395 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:34:02.538407 | orchestrator | Monday 05 January 2026 00:34:02 +0000 (0:00:00.041) 0:06:50.076 ******** 2026-01-05 00:34:02.538418 | orchestrator | 2026-01-05 00:34:02.538429 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:34:02.538449 | orchestrator | Monday 05 January 2026 00:34:02 +0000 (0:00:00.049) 0:06:50.125 ******** 2026-01-05 00:34:29.666412 | orchestrator | 2026-01-05 00:34:29.666504 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:34:29.666513 | orchestrator | Monday 05 January 2026 00:34:02 +0000 (0:00:00.041) 0:06:50.167 ******** 2026-01-05 00:34:29.666518 | orchestrator | 2026-01-05 00:34:29.666523 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:34:29.666528 | orchestrator | Monday 05 January 2026 00:34:02 +0000 (0:00:00.041) 0:06:50.208 ******** 2026-01-05 00:34:29.666532 | orchestrator | 
2026-01-05 00:34:29.666536 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-05 00:34:29.666541 | orchestrator | Monday 05 January 2026 00:34:02 +0000 (0:00:00.057) 0:06:50.266 ******** 2026-01-05 00:34:29.666545 | orchestrator | 2026-01-05 00:34:29.666549 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-05 00:34:29.666553 | orchestrator | Monday 05 January 2026 00:34:02 +0000 (0:00:00.052) 0:06:50.318 ******** 2026-01-05 00:34:29.666557 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:29.666562 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:29.666566 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:29.666570 | orchestrator | 2026-01-05 00:34:29.666574 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-05 00:34:29.666578 | orchestrator | Monday 05 January 2026 00:34:03 +0000 (0:00:01.215) 0:06:51.534 ******** 2026-01-05 00:34:29.666582 | orchestrator | changed: [testbed-manager] 2026-01-05 00:34:29.666587 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:29.666591 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:29.666595 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:29.666599 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:29.666603 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:29.666607 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:29.666611 | orchestrator | 2026-01-05 00:34:29.666615 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-05 00:34:29.666619 | orchestrator | Monday 05 January 2026 00:34:05 +0000 (0:00:01.342) 0:06:52.877 ******** 2026-01-05 00:34:29.666623 | orchestrator | changed: [testbed-manager] 2026-01-05 00:34:29.666627 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:29.666631 | orchestrator | changed: [testbed-node-4] 
2026-01-05 00:34:29.666635 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:29.666639 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:29.666642 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:29.666646 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:29.666650 | orchestrator | 2026-01-05 00:34:29.666654 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-05 00:34:29.666659 | orchestrator | Monday 05 January 2026 00:34:06 +0000 (0:00:01.425) 0:06:54.303 ******** 2026-01-05 00:34:29.666663 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:34:29.666667 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:29.666671 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:29.666674 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:29.666678 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:34:29.666682 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:29.666686 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:29.666706 | orchestrator | 2026-01-05 00:34:29.666710 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-05 00:34:29.666715 | orchestrator | Monday 05 January 2026 00:34:08 +0000 (0:00:02.466) 0:06:56.769 ******** 2026-01-05 00:34:29.666719 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:34:29.666723 | orchestrator | 2026-01-05 00:34:29.666727 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-05 00:34:29.666731 | orchestrator | Monday 05 January 2026 00:34:09 +0000 (0:00:00.124) 0:06:56.893 ******** 2026-01-05 00:34:29.666735 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:29.666739 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:29.666743 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:29.666747 | orchestrator | changed: [testbed-node-5] 2026-01-05 
00:34:29.666750 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:29.666754 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:29.666758 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:29.666775 | orchestrator | 2026-01-05 00:34:29.666779 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-05 00:34:29.666784 | orchestrator | Monday 05 January 2026 00:34:10 +0000 (0:00:01.023) 0:06:57.917 ******** 2026-01-05 00:34:29.666788 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:34:29.666792 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:34:29.666796 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:34:29.666800 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:34:29.666803 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:34:29.666807 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:34:29.666811 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:34:29.666815 | orchestrator | 2026-01-05 00:34:29.666819 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-05 00:34:29.666855 | orchestrator | Monday 05 January 2026 00:34:10 +0000 (0:00:00.558) 0:06:58.475 ******** 2026-01-05 00:34:29.666861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:34:29.666867 | orchestrator | 2026-01-05 00:34:29.666871 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-05 00:34:29.666875 | orchestrator | Monday 05 January 2026 00:34:11 +0000 (0:00:01.107) 0:06:59.582 ******** 2026-01-05 00:34:29.666879 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:29.666883 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:29.666887 | orchestrator 
| ok: [testbed-node-4] 2026-01-05 00:34:29.666891 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:29.666895 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:29.666899 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:29.666903 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:29.666907 | orchestrator | 2026-01-05 00:34:29.666911 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-05 00:34:29.666915 | orchestrator | Monday 05 January 2026 00:34:12 +0000 (0:00:00.928) 0:07:00.511 ******** 2026-01-05 00:34:29.666919 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-05 00:34:29.666934 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-05 00:34:29.666939 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-05 00:34:29.666943 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-05 00:34:29.666947 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-05 00:34:29.666951 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-05 00:34:29.666955 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-05 00:34:29.666959 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-05 00:34:29.666963 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-05 00:34:29.666967 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-05 00:34:29.666990 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-05 00:34:29.666994 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-05 00:34:29.666998 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-05 00:34:29.667002 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-05 00:34:29.667006 | orchestrator | 2026-01-05 00:34:29.667010 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-05 00:34:29.667014 | orchestrator | Monday 05 January 2026 00:34:15 +0000 (0:00:02.511) 0:07:03.023 ******** 2026-01-05 00:34:29.667018 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:34:29.667022 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:34:29.667026 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:34:29.667029 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:34:29.667033 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:34:29.667037 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:34:29.667041 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:34:29.667045 | orchestrator | 2026-01-05 00:34:29.667049 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-05 00:34:29.667053 | orchestrator | Monday 05 January 2026 00:34:16 +0000 (0:00:00.783) 0:07:03.806 ******** 2026-01-05 00:34:29.667059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:34:29.667065 | orchestrator | 2026-01-05 00:34:29.667069 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-05 00:34:29.667072 | orchestrator | Monday 05 January 2026 00:34:16 +0000 (0:00:00.855) 0:07:04.662 ******** 2026-01-05 00:34:29.667076 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:29.667080 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:29.667084 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:29.667088 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:29.667092 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:29.667096 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:29.667100 | orchestrator | ok: 
[testbed-node-2] 2026-01-05 00:34:29.667104 | orchestrator | 2026-01-05 00:34:29.667108 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-05 00:34:29.667112 | orchestrator | Monday 05 January 2026 00:34:17 +0000 (0:00:00.871) 0:07:05.533 ******** 2026-01-05 00:34:29.667116 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:29.667120 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:29.667124 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:29.667128 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:29.667132 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:29.667136 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:29.667139 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:29.667143 | orchestrator | 2026-01-05 00:34:29.667147 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-05 00:34:29.667151 | orchestrator | Monday 05 January 2026 00:34:18 +0000 (0:00:01.028) 0:07:06.562 ******** 2026-01-05 00:34:29.667155 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:34:29.667163 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:34:29.667167 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:34:29.667171 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:34:29.667175 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:34:29.667179 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:34:29.667183 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:34:29.667186 | orchestrator | 2026-01-05 00:34:29.667190 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-05 00:34:29.667194 | orchestrator | Monday 05 January 2026 00:34:19 +0000 (0:00:00.531) 0:07:07.093 ******** 2026-01-05 00:34:29.667198 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:29.667202 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:34:29.667210 | 
orchestrator | ok: [testbed-node-4] 2026-01-05 00:34:29.667214 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:34:29.667218 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:34:29.667222 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:34:29.667226 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:34:29.667230 | orchestrator | 2026-01-05 00:34:29.667234 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-05 00:34:29.667238 | orchestrator | Monday 05 January 2026 00:34:20 +0000 (0:00:01.512) 0:07:08.606 ******** 2026-01-05 00:34:29.667241 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:34:29.667245 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:34:29.667249 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:34:29.667254 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:34:29.667257 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:34:29.667261 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:34:29.667265 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:34:29.667269 | orchestrator | 2026-01-05 00:34:29.667273 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-05 00:34:29.667277 | orchestrator | Monday 05 January 2026 00:34:21 +0000 (0:00:00.577) 0:07:09.184 ******** 2026-01-05 00:34:29.667281 | orchestrator | ok: [testbed-manager] 2026-01-05 00:34:29.667285 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:34:29.667289 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:34:29.667293 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:34:29.667297 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:34:29.667301 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:34:29.667308 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:35:03.595217 | orchestrator | 2026-01-05 00:35:03.595337 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-05 00:35:03.595364 | orchestrator | Monday 05 January 2026 00:34:29 +0000 (0:00:08.268) 0:07:17.453 ******** 2026-01-05 00:35:03.595385 | orchestrator | ok: [testbed-manager] 2026-01-05 00:35:03.595405 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:35:03.595426 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:35:03.595445 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:35:03.595466 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:35:03.595487 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:35:03.595507 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:35:03.595526 | orchestrator | 2026-01-05 00:35:03.595544 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-05 00:35:03.595556 | orchestrator | Monday 05 January 2026 00:34:31 +0000 (0:00:01.662) 0:07:19.116 ******** 2026-01-05 00:35:03.595567 | orchestrator | ok: [testbed-manager] 2026-01-05 00:35:03.595579 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:35:03.595590 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:35:03.595601 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:35:03.595613 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:35:03.595624 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:35:03.595635 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:35:03.595646 | orchestrator | 2026-01-05 00:35:03.595657 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-05 00:35:03.595668 | orchestrator | Monday 05 January 2026 00:34:33 +0000 (0:00:01.760) 0:07:20.876 ******** 2026-01-05 00:35:03.595679 | orchestrator | ok: [testbed-manager] 2026-01-05 00:35:03.595690 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:35:03.595701 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:35:03.595712 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:35:03.595723 | 
orchestrator | changed: [testbed-node-0] 2026-01-05 00:35:03.595736 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:35:03.595749 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:35:03.595761 | orchestrator | 2026-01-05 00:35:03.595774 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-05 00:35:03.595787 | orchestrator | Monday 05 January 2026 00:34:34 +0000 (0:00:01.695) 0:07:22.571 ******** 2026-01-05 00:35:03.595860 | orchestrator | ok: [testbed-manager] 2026-01-05 00:35:03.595875 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:35:03.595888 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:35:03.595901 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:35:03.595914 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:35:03.595927 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:35:03.595939 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:35:03.595950 | orchestrator | 2026-01-05 00:35:03.595961 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-05 00:35:03.595972 | orchestrator | Monday 05 January 2026 00:34:35 +0000 (0:00:00.848) 0:07:23.419 ******** 2026-01-05 00:35:03.595983 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:35:03.595994 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:35:03.596005 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:35:03.596016 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:35:03.596026 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:35:03.596037 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:35:03.596048 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:35:03.596058 | orchestrator | 2026-01-05 00:35:03.596069 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-05 00:35:03.596080 | orchestrator | Monday 05 January 2026 00:34:36 +0000 (0:00:01.079) 0:07:24.499 ******** 
2026-01-05 00:35:03.596091 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:03.596102 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:03.596112 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:03.596123 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:03.596134 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:03.596144 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:03.596155 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:03.596166 | orchestrator |
2026-01-05 00:35:03.596177 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-05 00:35:03.596188 | orchestrator | Monday 05 January 2026 00:34:37 +0000 (0:00:00.570) 0:07:25.070 ********
2026-01-05 00:35:03.596199 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:03.596209 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:03.596220 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:03.596231 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:03.596268 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:03.596287 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:03.596306 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:03.596324 | orchestrator |
2026-01-05 00:35:03.596342 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-05 00:35:03.596360 | orchestrator | Monday 05 January 2026 00:34:37 +0000 (0:00:00.582) 0:07:25.652 ********
2026-01-05 00:35:03.596379 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:03.596398 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:03.596417 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:03.596437 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:03.596455 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:03.596474 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:03.596492 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:03.596511 | orchestrator |
2026-01-05 00:35:03.596530 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-05 00:35:03.596549 | orchestrator | Monday 05 January 2026 00:34:38 +0000 (0:00:00.592) 0:07:26.245 ********
2026-01-05 00:35:03.596569 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:03.596587 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:03.596607 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:03.596625 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:03.596643 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:03.596662 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:03.596681 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:03.596700 | orchestrator |
2026-01-05 00:35:03.596720 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-05 00:35:03.596738 | orchestrator | Monday 05 January 2026 00:34:39 +0000 (0:00:00.740) 0:07:26.985 ********
2026-01-05 00:35:03.596773 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:03.596791 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:03.596838 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:03.596858 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:03.596879 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:03.596898 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:03.596917 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:03.596936 | orchestrator |
2026-01-05 00:35:03.596977 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-05 00:35:03.596991 | orchestrator | Monday 05 January 2026 00:34:45 +0000 (0:00:05.850) 0:07:32.836 ********
2026-01-05 00:35:03.597002 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:03.597013 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:03.597024 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:03.597036 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:03.597047 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:03.597058 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:03.597068 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:03.597079 | orchestrator |
2026-01-05 00:35:03.597091 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-05 00:35:03.597102 | orchestrator | Monday 05 January 2026 00:34:45 +0000 (0:00:00.572) 0:07:33.408 ********
2026-01-05 00:35:03.597114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:35:03.597128 | orchestrator |
2026-01-05 00:35:03.597140 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-05 00:35:03.597151 | orchestrator | Monday 05 January 2026 00:34:46 +0000 (0:00:01.028) 0:07:34.437 ********
2026-01-05 00:35:03.597162 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:03.597173 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:03.597183 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:03.597194 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:03.597205 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:03.597216 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:03.597227 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:03.597238 | orchestrator |
2026-01-05 00:35:03.597249 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-05 00:35:03.597260 | orchestrator | Monday 05 January 2026 00:34:48 +0000 (0:00:01.932) 0:07:36.369 ********
2026-01-05 00:35:03.597271 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:03.597282 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:03.597293 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:03.597304 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:03.597314 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:03.597325 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:03.597336 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:03.597347 | orchestrator |
2026-01-05 00:35:03.597358 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-05 00:35:03.597369 | orchestrator | Monday 05 January 2026 00:34:49 +0000 (0:00:01.164) 0:07:37.534 ********
2026-01-05 00:35:03.597379 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:03.597390 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:03.597401 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:03.597412 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:03.597423 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:03.597434 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:03.597444 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:03.597455 | orchestrator |
2026-01-05 00:35:03.597467 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-05 00:35:03.597477 | orchestrator | Monday 05 January 2026 00:34:50 +0000 (0:00:00.876) 0:07:38.411 ********
2026-01-05 00:35:03.597489 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:35:03.597511 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:35:03.597522 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:35:03.597541 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:35:03.597552 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:35:03.597563 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:35:03.597574 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-05 00:35:03.597585 | orchestrator |
2026-01-05 00:35:03.597596 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-05 00:35:03.597607 | orchestrator | Monday 05 January 2026 00:34:52 +0000 (0:00:01.897) 0:07:40.309 ********
2026-01-05 00:35:03.597619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:35:03.597630 | orchestrator |
2026-01-05 00:35:03.597641 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-05 00:35:03.597652 | orchestrator | Monday 05 January 2026 00:34:53 +0000 (0:00:00.833) 0:07:41.142 ********
2026-01-05 00:35:03.597663 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:03.597675 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:03.597685 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:03.597697 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:03.597708 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:03.597719 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:03.597729 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:03.597740 | orchestrator |
2026-01-05 00:35:03.597759 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-05 00:35:35.550880 | orchestrator | Monday 05 January 2026 00:35:03 +0000 (0:00:10.238) 0:07:51.380 ********
2026-01-05 00:35:35.551043 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:35.551070 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:35.551091 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:35.551110 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:35.551129 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:35.551149 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:35.551169 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:35.551188 | orchestrator |
2026-01-05 00:35:35.551208 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-05 00:35:35.551228 | orchestrator | Monday 05 January 2026 00:35:05 +0000 (0:00:02.061) 0:07:53.442 ********
2026-01-05 00:35:35.551247 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:35.551267 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:35.551287 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:35.551306 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:35.551325 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:35.551345 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:35.551364 | orchestrator |
2026-01-05 00:35:35.551384 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-05 00:35:35.551402 | orchestrator | Monday 05 January 2026 00:35:07 +0000 (0:00:01.361) 0:07:54.803 ********
2026-01-05 00:35:35.551420 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:35.551439 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:35.551494 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:35.551512 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:35.551528 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:35.551543 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:35.551560 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:35.551577 | orchestrator |
2026-01-05 00:35:35.551595 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-05 00:35:35.551611 | orchestrator |
2026-01-05 00:35:35.551629 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-05 00:35:35.551646 | orchestrator | Monday 05 January 2026 00:35:08 +0000 (0:00:01.256) 0:07:56.060 ********
2026-01-05 00:35:35.551663 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:35.551680 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:35.551697 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:35.551716 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:35.551734 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:35.551752 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:35.551770 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:35.551819 | orchestrator |
2026-01-05 00:35:35.551839 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-05 00:35:35.551856 | orchestrator |
2026-01-05 00:35:35.551871 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-05 00:35:35.551887 | orchestrator | Monday 05 January 2026 00:35:08 +0000 (0:00:00.740) 0:07:56.801 ********
2026-01-05 00:35:35.551902 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:35.551918 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:35.551934 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:35.551950 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:35.551967 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:35.551984 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:35.552000 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:35.552011 | orchestrator |
2026-01-05 00:35:35.552021 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-05 00:35:35.552031 | orchestrator | Monday 05 January 2026 00:35:10 +0000 (0:00:01.336) 0:07:58.138 ********
2026-01-05 00:35:35.552041 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:35.552051 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:35.552063 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:35.552080 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:35.552104 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:35.552121 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:35.552136 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:35.552151 | orchestrator |
2026-01-05 00:35:35.552166 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-05 00:35:35.552203 | orchestrator | Monday 05 January 2026 00:35:11 +0000 (0:00:01.512) 0:07:59.650 ********
2026-01-05 00:35:35.552220 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:35:35.552237 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:35:35.552252 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:35:35.552268 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:35:35.552285 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:35:35.552301 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:35:35.552318 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:35:35.552333 | orchestrator |
2026-01-05 00:35:35.552344 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-05 00:35:35.552353 | orchestrator | Monday 05 January 2026 00:35:12 +0000 (0:00:00.513) 0:08:00.163 ********
2026-01-05 00:35:35.552364 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:35:35.552376 | orchestrator |
2026-01-05 00:35:35.552385 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-05 00:35:35.552395 | orchestrator | Monday 05 January 2026 00:35:13 +0000 (0:00:01.077) 0:08:01.240 ********
2026-01-05 00:35:35.552422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:35:35.552435 | orchestrator |
2026-01-05 00:35:35.552445 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-05 00:35:35.552454 | orchestrator | Monday 05 January 2026 00:35:14 +0000 (0:00:00.854) 0:08:02.095 ********
2026-01-05 00:35:35.552464 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:35.552474 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:35.552483 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:35.552493 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:35.552502 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:35.552514 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:35.552525 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:35.552542 | orchestrator |
2026-01-05 00:35:35.552595 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-05 00:35:35.552618 | orchestrator | Monday 05 January 2026 00:35:23 +0000 (0:00:08.980) 0:08:11.075 ********
2026-01-05 00:35:35.552636 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:35.552652 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:35.552667 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:35.552678 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:35.552689 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:35.552700 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:35.552711 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:35.552722 | orchestrator |
2026-01-05 00:35:35.552733 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-05 00:35:35.552744 | orchestrator | Monday 05 January 2026 00:35:24 +0000 (0:00:01.166) 0:08:12.241 ********
2026-01-05 00:35:35.552755 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:35.552765 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:35.552776 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:35.552787 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:35.552856 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:35.552867 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:35.552878 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:35.552889 | orchestrator |
2026-01-05 00:35:35.552900 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-05 00:35:35.552910 | orchestrator | Monday 05 January 2026 00:35:25 +0000 (0:00:01.432) 0:08:13.674 ********
2026-01-05 00:35:35.552920 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:35.552929 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:35.552939 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:35.552948 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:35.552957 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:35.552967 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:35.552976 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:35.552986 | orchestrator |
2026-01-05 00:35:35.552995 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-05 00:35:35.553005 | orchestrator | Monday 05 January 2026 00:35:27 +0000 (0:00:02.031) 0:08:15.705 ********
2026-01-05 00:35:35.553015 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:35.553024 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:35.553033 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:35.553043 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:35.553053 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:35.553062 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:35.553072 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:35.553081 | orchestrator |
2026-01-05 00:35:35.553091 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-05 00:35:35.553100 | orchestrator | Monday 05 January 2026 00:35:29 +0000 (0:00:01.256) 0:08:16.962 ********
2026-01-05 00:35:35.553120 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:35.553129 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:35.553139 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:35.553148 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:35.553158 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:35.553167 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:35.553177 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:35.553186 | orchestrator |
2026-01-05 00:35:35.553196 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-05 00:35:35.553205 | orchestrator |
2026-01-05 00:35:35.553215 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-05 00:35:35.553225 | orchestrator | Monday 05 January 2026 00:35:30 +0000 (0:00:01.138) 0:08:18.100 ********
2026-01-05 00:35:35.553234 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:35:35.553244 | orchestrator |
2026-01-05 00:35:35.553254 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-05 00:35:35.553271 | orchestrator | Monday 05 January 2026 00:35:31 +0000 (0:00:00.859) 0:08:18.960 ********
2026-01-05 00:35:35.553281 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:35.553346 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:35.553356 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:35.553365 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:35.553375 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:35.553385 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:35.553394 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:35.553403 | orchestrator |
2026-01-05 00:35:35.553413 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-05 00:35:35.553423 | orchestrator | Monday 05 January 2026 00:35:32 +0000 (0:00:01.066) 0:08:20.026 ********
2026-01-05 00:35:35.553433 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:35.553443 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:35.553452 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:35.553462 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:35.553471 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:35.553481 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:35.553490 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:35.553500 | orchestrator |
2026-01-05 00:35:35.553509 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-05 00:35:35.553519 | orchestrator | Monday 05 January 2026 00:35:33 +0000 (0:00:01.384) 0:08:21.411 ********
2026-01-05 00:35:35.553529 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:35:35.553538 | orchestrator |
2026-01-05 00:35:35.553548 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-05 00:35:35.553558 | orchestrator | Monday 05 January 2026 00:35:34 +0000 (0:00:01.083) 0:08:22.494 ********
2026-01-05 00:35:35.553567 | orchestrator | ok: [testbed-manager]
2026-01-05 00:35:35.553577 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:35:35.553586 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:35:35.553596 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:35:35.553605 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:35:35.553615 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:35:35.553624 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:35:35.553633 | orchestrator |
2026-01-05 00:35:35.553652 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-05 00:35:37.189508 | orchestrator | Monday 05 January 2026 00:35:35 +0000 (0:00:00.843) 0:08:23.337 ********
2026-01-05 00:35:37.189641 | orchestrator | changed: [testbed-manager]
2026-01-05 00:35:37.189658 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:35:37.189669 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:35:37.189679 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:35:37.189718 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:35:37.189729 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:35:37.189738 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:35:37.189748 | orchestrator |
2026-01-05 00:35:37.189761 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:35:37.189780 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-05 00:35:37.189834 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-05 00:35:37.189852 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-05 00:35:37.189868 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-05 00:35:37.189884 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-05 00:35:37.189895 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-05 00:35:37.189904 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-05 00:35:37.189914 | orchestrator |
2026-01-05 00:35:37.189924 | orchestrator |
2026-01-05 00:35:37.189934 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:35:37.189944 | orchestrator | Monday 05 January 2026 00:35:36 +0000 (0:00:01.106) 0:08:24.443 ********
2026-01-05 00:35:37.189954 | orchestrator | ===============================================================================
2026-01-05 00:35:37.189964 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.77s
2026-01-05 00:35:37.189974 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.88s
2026-01-05 00:35:37.189983 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.89s
2026-01-05 00:35:37.189993 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.17s
2026-01-05 00:35:37.190003 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.88s
2026-01-05 00:35:37.190074 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.01s
2026-01-05 00:35:37.190089 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.20s
2026-01-05 00:35:37.190101 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.25s
2026-01-05 00:35:37.190112 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.24s
2026-01-05 00:35:37.190124 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.10s
2026-01-05 00:35:37.190150 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.98s
2026-01-05 00:35:37.190161 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.79s
2026-01-05 00:35:37.190173 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.73s
2026-01-05 00:35:37.190183 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.27s
2026-01-05 00:35:37.190195 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.10s
2026-01-05 00:35:37.190205 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.58s
2026-01-05 00:35:37.190216 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.61s
2026-01-05 00:35:37.190228 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.45s
2026-01-05 00:35:37.190239 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.04s
2026-01-05 00:35:37.190260 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.85s
2026-01-05 00:35:37.521918 | orchestrator | + osism apply fail2ban
2026-01-05 00:35:50.590300 | orchestrator | 2026-01-05 00:35:50 | INFO  | Task f890e0b1-5f61-4283-b406-f8ea0d1010aa (fail2ban) was prepared for execution.
2026-01-05 00:35:50.590440 | orchestrator | 2026-01-05 00:35:50 | INFO  | It takes a moment until task f890e0b1-5f61-4283-b406-f8ea0d1010aa (fail2ban) has been started and output is visible here.
2026-01-05 00:36:13.912575 | orchestrator |
2026-01-05 00:36:13.912702 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-05 00:36:13.912718 | orchestrator |
2026-01-05 00:36:13.912731 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-05 00:36:13.912743 | orchestrator | Monday 05 January 2026 00:35:55 +0000 (0:00:00.291) 0:00:00.291 ********
2026-01-05 00:36:13.912781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:36:13.912795 | orchestrator |
2026-01-05 00:36:13.912807 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-05 00:36:13.912818 | orchestrator | Monday 05 January 2026 00:35:56 +0000 (0:00:01.252) 0:00:01.544 ********
2026-01-05 00:36:13.912829 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:13.912842 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:13.912853 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:13.912864 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:13.912875 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:13.912886 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:13.912897 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:13.912908 | orchestrator |
2026-01-05 00:36:13.912919 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-05 00:36:13.912930 | orchestrator | Monday 05 January 2026 00:36:08 +0000 (0:00:11.801) 0:00:13.346 ********
2026-01-05 00:36:13.912941 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:13.912952 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:13.912963 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:13.912974 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:13.912985 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:13.912995 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:13.913006 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:13.913017 | orchestrator |
2026-01-05 00:36:13.913028 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-05 00:36:13.913039 | orchestrator | Monday 05 January 2026 00:36:10 +0000 (0:00:01.550) 0:00:14.896 ********
2026-01-05 00:36:13.913050 | orchestrator | ok: [testbed-manager]
2026-01-05 00:36:13.913063 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:36:13.913074 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:36:13.913085 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:36:13.913098 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:36:13.913110 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:36:13.913123 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:36:13.913135 | orchestrator |
2026-01-05 00:36:13.913148 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-05 00:36:13.913161 | orchestrator | Monday 05 January 2026 00:36:11 +0000 (0:00:01.497) 0:00:16.393 ********
2026-01-05 00:36:13.913172 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:36:13.913183 | orchestrator | changed: [testbed-manager]
2026-01-05 00:36:13.913194 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:36:13.913206 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:36:13.913217 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:36:13.913228 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:36:13.913239 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:36:13.913281 | orchestrator |
2026-01-05 00:36:13.913292 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:36:13.913304 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:13.913316 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:13.913327 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:13.913338 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:13.913349 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:13.913360 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:13.913371 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:36:13.913381 | orchestrator |
2026-01-05 00:36:13.913392 | orchestrator |
2026-01-05 00:36:13.913403 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:36:13.913414 | orchestrator | Monday 05 January 2026 00:36:13 +0000 (0:00:01.757) 0:00:18.151 ********
2026-01-05 00:36:13.913424 | orchestrator | ===============================================================================
2026-01-05 00:36:13.913435 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.80s
2026-01-05 00:36:13.913446 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.76s
2026-01-05 00:36:13.913457 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.55s
2026-01-05 00:36:13.913468 | orchestrator | osism.services.fail2ban :
Manage fail2ban service ----------------------- 1.50s 2026-01-05 00:36:13.913479 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.25s 2026-01-05 00:36:14.255815 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-05 00:36:14.255926 | orchestrator | + osism apply network 2026-01-05 00:36:26.377042 | orchestrator | 2026-01-05 00:36:26 | INFO  | Task a4d2bac8-97bb-423d-98f9-a82f07b5a971 (network) was prepared for execution. 2026-01-05 00:36:26.377173 | orchestrator | 2026-01-05 00:36:26 | INFO  | It takes a moment until task a4d2bac8-97bb-423d-98f9-a82f07b5a971 (network) has been started and output is visible here. 2026-01-05 00:36:57.365778 | orchestrator | 2026-01-05 00:36:57.365879 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-01-05 00:36:57.365890 | orchestrator | 2026-01-05 00:36:57.365897 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-01-05 00:36:57.365904 | orchestrator | Monday 05 January 2026 00:36:30 +0000 (0:00:00.263) 0:00:00.263 ******** 2026-01-05 00:36:57.365911 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:57.365919 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:57.365926 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:57.365932 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:57.365938 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:57.365945 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:57.365951 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:57.365957 | orchestrator | 2026-01-05 00:36:57.365964 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-01-05 00:36:57.365970 | orchestrator | Monday 05 January 2026 00:36:31 +0000 (0:00:00.740) 0:00:01.003 ******** 2026-01-05 00:36:57.365978 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:36:57.366006 | orchestrator | 2026-01-05 00:36:57.366013 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-01-05 00:36:57.366069 | orchestrator | Monday 05 January 2026 00:36:32 +0000 (0:00:01.270) 0:00:02.274 ******** 2026-01-05 00:36:57.366075 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:57.366082 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:57.366088 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:57.366094 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:57.366100 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:57.366106 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:57.366113 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:57.366119 | orchestrator | 2026-01-05 00:36:57.366125 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-01-05 00:36:57.366133 | orchestrator | Monday 05 January 2026 00:36:35 +0000 (0:00:02.339) 0:00:04.613 ******** 2026-01-05 00:36:57.366139 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:57.366145 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:57.366151 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:57.366157 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:57.366163 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:57.366169 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:57.366175 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:57.366182 | orchestrator | 2026-01-05 00:36:57.366188 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-01-05 00:36:57.366194 | orchestrator | Monday 05 January 2026 00:36:37 +0000 (0:00:02.047) 0:00:06.660 ******** 
2026-01-05 00:36:57.366201 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-01-05 00:36:57.366208 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-01-05 00:36:57.366214 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-01-05 00:36:57.366220 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-01-05 00:36:57.366226 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-01-05 00:36:57.366233 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-01-05 00:36:57.366239 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-01-05 00:36:57.366245 | orchestrator | 2026-01-05 00:36:57.366252 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-01-05 00:36:57.366277 | orchestrator | Monday 05 January 2026 00:36:38 +0000 (0:00:01.040) 0:00:07.701 ******** 2026-01-05 00:36:57.366285 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 00:36:57.366294 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 00:36:57.366301 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:36:57.366308 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 00:36:57.366316 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-05 00:36:57.366323 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-05 00:36:57.366330 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-05 00:36:57.366338 | orchestrator | 2026-01-05 00:36:57.366345 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-01-05 00:36:57.366356 | orchestrator | Monday 05 January 2026 00:36:41 +0000 (0:00:03.447) 0:00:11.148 ******** 2026-01-05 00:36:57.366363 | orchestrator | changed: [testbed-manager] 2026-01-05 00:36:57.366371 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:57.366378 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:57.366404 | orchestrator | changed: 
[testbed-node-2] 2026-01-05 00:36:57.366411 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:57.366418 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:36:57.366425 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:36:57.366432 | orchestrator | 2026-01-05 00:36:57.366440 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-05 00:36:57.366447 | orchestrator | Monday 05 January 2026 00:36:43 +0000 (0:00:01.715) 0:00:12.864 ******** 2026-01-05 00:36:57.366454 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:36:57.366462 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 00:36:57.366476 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 00:36:57.366483 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 00:36:57.366490 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-05 00:36:57.366498 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-05 00:36:57.366504 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-05 00:36:57.366511 | orchestrator | 2026-01-05 00:36:57.366518 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-05 00:36:57.366525 | orchestrator | Monday 05 January 2026 00:36:45 +0000 (0:00:01.883) 0:00:14.747 ******** 2026-01-05 00:36:57.366532 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:57.366540 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:57.366550 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:57.366560 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:57.366571 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:57.366578 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:57.366586 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:57.366593 | orchestrator | 2026-01-05 00:36:57.366600 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-05 00:36:57.366623 | 
orchestrator | Monday 05 January 2026 00:36:46 +0000 (0:00:01.195) 0:00:15.943 ******** 2026-01-05 00:36:57.366630 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:36:57.366637 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:36:57.366643 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:36:57.366649 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:36:57.366655 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:36:57.366661 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:36:57.366667 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:36:57.366674 | orchestrator | 2026-01-05 00:36:57.366680 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-05 00:36:57.366686 | orchestrator | Monday 05 January 2026 00:36:47 +0000 (0:00:00.759) 0:00:16.702 ******** 2026-01-05 00:36:57.366692 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:57.366698 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:57.366704 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:57.366726 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:57.366733 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:57.366739 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:57.366745 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:57.366751 | orchestrator | 2026-01-05 00:36:57.366757 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-05 00:36:57.366764 | orchestrator | Monday 05 January 2026 00:36:50 +0000 (0:00:02.869) 0:00:19.572 ******** 2026-01-05 00:36:57.366770 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:36:57.366776 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:36:57.366782 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:36:57.366789 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:36:57.366795 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:36:57.366801 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 00:36:57.366808 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-05 00:36:57.366815 | orchestrator | 2026-01-05 00:36:57.366821 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-05 00:36:57.366828 | orchestrator | Monday 05 January 2026 00:36:50 +0000 (0:00:00.930) 0:00:20.503 ******** 2026-01-05 00:36:57.366834 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:57.366840 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:36:57.366846 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:36:57.366852 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:36:57.366858 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:36:57.366864 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:36:57.366870 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:36:57.366877 | orchestrator | 2026-01-05 00:36:57.366883 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-05 00:36:57.366895 | orchestrator | Monday 05 January 2026 00:36:52 +0000 (0:00:01.743) 0:00:22.246 ******** 2026-01-05 00:36:57.366902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:36:57.366910 | orchestrator | 2026-01-05 00:36:57.366916 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-05 00:36:57.366922 | orchestrator | Monday 05 January 2026 00:36:54 +0000 (0:00:01.317) 0:00:23.564 ******** 2026-01-05 00:36:57.366929 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:57.366935 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:57.366941 | orchestrator 
| ok: [testbed-node-1] 2026-01-05 00:36:57.366947 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:57.366953 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:57.366959 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:57.366965 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:57.366971 | orchestrator | 2026-01-05 00:36:57.366978 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-05 00:36:57.366984 | orchestrator | Monday 05 January 2026 00:36:55 +0000 (0:00:00.998) 0:00:24.562 ******** 2026-01-05 00:36:57.366990 | orchestrator | ok: [testbed-manager] 2026-01-05 00:36:57.366996 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:36:57.367002 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:36:57.367008 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:36:57.367014 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:36:57.367025 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:36:57.367031 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:36:57.367037 | orchestrator | 2026-01-05 00:36:57.367044 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-05 00:36:57.367050 | orchestrator | Monday 05 January 2026 00:36:55 +0000 (0:00:00.893) 0:00:25.455 ******** 2026-01-05 00:36:57.367056 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:36:57.367062 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:36:57.367068 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:36:57.367075 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:36:57.367081 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:36:57.367087 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:36:57.367093 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:36:57.367099 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:36:57.367105 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:36:57.367111 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:36:57.367118 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:36:57.367124 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-05 00:36:57.367130 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:36:57.367136 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-05 00:36:57.367143 | orchestrator | 2026-01-05 00:36:57.367153 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-05 00:37:15.240681 | orchestrator | Monday 05 January 2026 00:36:57 +0000 (0:00:01.399) 0:00:26.855 ******** 2026-01-05 00:37:15.240816 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:37:15.240833 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:37:15.240846 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:37:15.240857 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:37:15.240868 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:37:15.240901 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:37:15.240912 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:37:15.240923 | orchestrator | 2026-01-05 00:37:15.240935 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-05 00:37:15.240947 | orchestrator | Monday 05 January 2026 00:36:58 +0000 (0:00:00.664) 0:00:27.519 ******** 2026-01-05 00:37:15.240959 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-3, testbed-node-2, testbed-node-5 2026-01-05 00:37:15.240972 | orchestrator | 2026-01-05 00:37:15.240983 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-05 00:37:15.240995 | orchestrator | Monday 05 January 2026 00:37:02 +0000 (0:00:04.834) 0:00:32.354 ******** 2026-01-05 00:37:15.241007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241030 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-05 
00:37:15.241065 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241121 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241214 | orchestrator | 2026-01-05 00:37:15.241227 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-05 00:37:15.241241 | orchestrator | Monday 05 January 2026 00:37:08 +0000 (0:00:06.043) 0:00:38.398 ******** 2026-01-05 00:37:15.241261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241300 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241319 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241395 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 
'mtu': 1350, 'vni': 42}}) 2026-01-05 00:37:15.241460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:15.241514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:21.722514 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-05 00:37:21.722639 | orchestrator | 2026-01-05 00:37:21.722656 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-05 00:37:21.722670 | orchestrator | Monday 05 January 2026 00:37:15 +0000 (0:00:06.331) 0:00:44.729 ******** 2026-01-05 00:37:21.722682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:37:21.722744 | orchestrator | 2026-01-05 00:37:21.722766 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
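The `Create systemd networkd netdev files` items above carry each VXLAN's parameters (interface name, `vni`, `mtu`, `local_ip`). As a rough illustration of what a task like that renders, here is a minimal sketch; the template layout and the function name are assumptions for illustration, not the `osism.commons.network` role's actual files:

```python
# Sketch: render a systemd-networkd .netdev unit from one of the vxlan
# items shown in the task output above. The template is an assumption,
# not the role's real template.
NETDEV_TEMPLATE = """[NetDev]
Name={name}
Kind=vxlan
MTUBytes={mtu}

[VXLAN]
VNI={vni}
Local={local_ip}
"""

def render_netdev(name: str, value: dict) -> str:
    """Fill the template from an item dict like those logged above."""
    return NETDEV_TEMPLATE.format(
        name=name,
        mtu=value["mtu"],
        vni=value["vni"],
        local_ip=value["local_ip"],
    )

# Example item for testbed-manager, copied from the log output.
item = {
    "key": "vxlan0",
    "value": {
        "addresses": ["192.168.112.5/20"],
        "dests": ["192.168.16.10", "192.168.16.11", "192.168.16.12",
                  "192.168.16.13", "192.168.16.14", "192.168.16.15"],
        "local_ip": "192.168.16.5",
        "mtu": 1350,
        "vni": 42,
    },
}
print(render_netdev(item["key"], item["value"]))
```

The matching `.network` files created in the next task would then attach the `addresses` entries (e.g. `192.168.112.5/20` on the manager) to the interface.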
2026-01-05 00:37:21.722785 | orchestrator | Monday 05 January 2026 00:37:16 +0000 (0:00:01.325) 0:00:46.054 ******** 2026-01-05 00:37:21.722799 | orchestrator | ok: [testbed-manager] 2026-01-05 00:37:21.722812 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:37:21.722823 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:37:21.722834 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:37:21.722845 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:37:21.722855 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:37:21.722866 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:37:21.722877 | orchestrator | 2026-01-05 00:37:21.722888 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-05 00:37:21.722899 | orchestrator | Monday 05 January 2026 00:37:17 +0000 (0:00:01.208) 0:00:47.262 ******** 2026-01-05 00:37:21.722910 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:37:21.722922 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:37:21.722933 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:37:21.722944 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:37:21.722955 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:37:21.722966 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:37:21.722977 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:37:21.722987 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:37:21.722998 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:37:21.723010 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:37:21.723021 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:37:21.723031 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:37:21.723067 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:37:21.723078 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:37:21.723089 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:37:21.723100 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:37:21.723128 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:37:21.723139 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:37:21.723150 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:37:21.723161 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:37:21.723172 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:37:21.723183 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:37:21.723194 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:37:21.723204 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:37:21.723215 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:37:21.723226 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:37:21.723237 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:37:21.723247 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:37:21.723258 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:37:21.723269 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:37:21.723279 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-05 00:37:21.723290 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-05 00:37:21.723301 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-05 00:37:21.723312 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-05 00:37:21.723322 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:37:21.723333 | orchestrator | 2026-01-05 00:37:21.723344 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-01-05 00:37:21.723375 | orchestrator | Monday 05 January 2026 00:37:19 +0000 (0:00:02.090) 0:00:49.352 ******** 2026-01-05 00:37:21.723387 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:37:21.723398 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:37:21.723409 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:37:21.723420 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:37:21.723430 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:37:21.723441 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:37:21.723452 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:37:21.723463 | orchestrator | 2026-01-05 00:37:21.723474 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-01-05 00:37:21.723485 | orchestrator | Monday 05 January 2026 00:37:20 +0000 (0:00:00.699) 0:00:50.051 ******** 2026-01-05 00:37:21.723497 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:37:21.723508 | orchestrator | skipping: [testbed-node-0] 2026-01-05 
00:37:21.723525 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:37:21.723543 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:37:21.723561 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:37:21.723579 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:37:21.723598 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:37:21.723611 | orchestrator | 2026-01-05 00:37:21.723622 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:37:21.723634 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 00:37:21.723656 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:37:21.723668 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:37:21.723679 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:37:21.723714 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:37:21.723727 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:37:21.723738 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 00:37:21.723749 | orchestrator | 2026-01-05 00:37:21.723760 | orchestrator | 2026-01-05 00:37:21.723771 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:37:21.723781 | orchestrator | Monday 05 January 2026 00:37:21 +0000 (0:00:00.750) 0:00:50.802 ******** 2026-01-05 00:37:21.723792 | orchestrator | =============================================================================== 2026-01-05 00:37:21.723803 | orchestrator | osism.commons.network : Create systemd networkd network 
files ----------- 6.33s 2026-01-05 00:37:21.723814 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.04s 2026-01-05 00:37:21.723825 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.83s 2026-01-05 00:37:21.723835 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.45s 2026-01-05 00:37:21.723853 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.87s 2026-01-05 00:37:21.723864 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.34s 2026-01-05 00:37:21.723875 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.09s 2026-01-05 00:37:21.723885 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 2.05s 2026-01-05 00:37:21.723896 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.88s 2026-01-05 00:37:21.723907 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.74s 2026-01-05 00:37:21.723918 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.72s 2026-01-05 00:37:21.723928 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.40s 2026-01-05 00:37:21.723939 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.33s 2026-01-05 00:37:21.723950 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s 2026-01-05 00:37:21.723961 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s 2026-01-05 00:37:21.723972 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s 2026-01-05 00:37:21.723982 | orchestrator | osism.commons.network : Check if path for interface file exists 
--------- 1.20s 2026-01-05 00:37:21.723993 | orchestrator | osism.commons.network : Create required directories --------------------- 1.04s 2026-01-05 00:37:21.724004 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s 2026-01-05 00:37:21.724014 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.93s 2026-01-05 00:37:22.062541 | orchestrator | + osism apply wireguard 2026-01-05 00:37:34.283245 | orchestrator | 2026-01-05 00:37:34 | INFO  | Task f4d06f18-857c-4572-88b3-0d1712e3fef6 (wireguard) was prepared for execution. 2026-01-05 00:37:34.283406 | orchestrator | 2026-01-05 00:37:34 | INFO  | It takes a moment until task f4d06f18-857c-4572-88b3-0d1712e3fef6 (wireguard) has been started and output is visible here. 2026-01-05 00:37:55.579025 | orchestrator | 2026-01-05 00:37:55.579113 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-05 00:37:55.579120 | orchestrator | 2026-01-05 00:37:55.579124 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-05 00:37:55.579129 | orchestrator | Monday 05 January 2026 00:37:38 +0000 (0:00:00.223) 0:00:00.223 ******** 2026-01-05 00:37:55.579133 | orchestrator | ok: [testbed-manager] 2026-01-05 00:37:55.579138 | orchestrator | 2026-01-05 00:37:55.579146 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-05 00:37:55.579151 | orchestrator | Monday 05 January 2026 00:37:40 +0000 (0:00:01.667) 0:00:01.891 ******** 2026-01-05 00:37:55.579155 | orchestrator | changed: [testbed-manager] 2026-01-05 00:37:55.579159 | orchestrator | 2026-01-05 00:37:55.579163 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-05 00:37:55.579167 | orchestrator | Monday 05 January 2026 00:37:47 +0000 (0:00:07.054) 0:00:08.945 ******** 2026-01-05 00:37:55.579171 
| orchestrator | changed: [testbed-manager] 2026-01-05 00:37:55.579175 | orchestrator | 2026-01-05 00:37:55.579179 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-05 00:37:55.579183 | orchestrator | Monday 05 January 2026 00:37:47 +0000 (0:00:00.567) 0:00:09.513 ******** 2026-01-05 00:37:55.579186 | orchestrator | changed: [testbed-manager] 2026-01-05 00:37:55.579190 | orchestrator | 2026-01-05 00:37:55.579194 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-05 00:37:55.579198 | orchestrator | Monday 05 January 2026 00:37:48 +0000 (0:00:00.450) 0:00:09.963 ******** 2026-01-05 00:37:55.579201 | orchestrator | ok: [testbed-manager] 2026-01-05 00:37:55.579205 | orchestrator | 2026-01-05 00:37:55.579209 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-05 00:37:55.579213 | orchestrator | Monday 05 January 2026 00:37:49 +0000 (0:00:00.732) 0:00:10.696 ******** 2026-01-05 00:37:55.579217 | orchestrator | ok: [testbed-manager] 2026-01-05 00:37:55.579220 | orchestrator | 2026-01-05 00:37:55.579224 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-05 00:37:55.579228 | orchestrator | Monday 05 January 2026 00:37:49 +0000 (0:00:00.432) 0:00:11.129 ******** 2026-01-05 00:37:55.579232 | orchestrator | ok: [testbed-manager] 2026-01-05 00:37:55.579235 | orchestrator | 2026-01-05 00:37:55.579239 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-05 00:37:55.579243 | orchestrator | Monday 05 January 2026 00:37:50 +0000 (0:00:00.449) 0:00:11.579 ******** 2026-01-05 00:37:55.579247 | orchestrator | changed: [testbed-manager] 2026-01-05 00:37:55.579250 | orchestrator | 2026-01-05 00:37:55.579254 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-05 
00:37:55.579258 | orchestrator | Monday 05 January 2026 00:37:51 +0000 (0:00:01.255) 0:00:12.834 ******** 2026-01-05 00:37:55.579262 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-05 00:37:55.579266 | orchestrator | changed: [testbed-manager] 2026-01-05 00:37:55.579270 | orchestrator | 2026-01-05 00:37:55.579274 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-05 00:37:55.579278 | orchestrator | Monday 05 January 2026 00:37:52 +0000 (0:00:00.979) 0:00:13.813 ******** 2026-01-05 00:37:55.579281 | orchestrator | changed: [testbed-manager] 2026-01-05 00:37:55.579285 | orchestrator | 2026-01-05 00:37:55.579289 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-05 00:37:55.579293 | orchestrator | Monday 05 January 2026 00:37:54 +0000 (0:00:01.775) 0:00:15.589 ******** 2026-01-05 00:37:55.579297 | orchestrator | changed: [testbed-manager] 2026-01-05 00:37:55.579300 | orchestrator | 2026-01-05 00:37:55.579304 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:37:55.579309 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 00:37:55.579333 | orchestrator | 2026-01-05 00:37:55.579337 | orchestrator | 2026-01-05 00:37:55.579341 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:37:55.579344 | orchestrator | Monday 05 January 2026 00:37:55 +0000 (0:00:01.006) 0:00:16.595 ******** 2026-01-05 00:37:55.579348 | orchestrator | =============================================================================== 2026-01-05 00:37:55.579352 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.05s 2026-01-05 00:37:55.579356 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.78s 2026-01-05 
00:37:55.579359 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.67s 2026-01-05 00:37:55.579363 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s 2026-01-05 00:37:55.579367 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.01s 2026-01-05 00:37:55.579370 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.98s 2026-01-05 00:37:55.579374 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.73s 2026-01-05 00:37:55.579378 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2026-01-05 00:37:55.579382 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2026-01-05 00:37:55.579385 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2026-01-05 00:37:55.579389 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s 2026-01-05 00:37:55.939922 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-05 00:37:55.974215 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-05 00:37:55.974271 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-05 00:37:56.048361 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 188 0 --:--:-- --:--:-- --:--:-- 189 2026-01-05 00:37:56.062798 | orchestrator | + osism apply --environment custom workarounds 2026-01-05 00:37:58.195085 | orchestrator | 2026-01-05 00:37:58 | INFO  | Trying to run play workarounds in environment custom 2026-01-05 00:38:08.423493 | orchestrator | 2026-01-05 00:38:08 | INFO  | Task 5826264a-2a88-4f16-a0d5-fa37e512e2ee (workarounds) was prepared for execution. 
2026-01-05 00:38:08.423645 | orchestrator | 2026-01-05 00:38:08 | INFO  | It takes a moment until task 5826264a-2a88-4f16-a0d5-fa37e512e2ee (workarounds) has been started and output is visible here. 2026-01-05 00:38:34.926195 | orchestrator | 2026-01-05 00:38:34.926326 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 00:38:34.926347 | orchestrator | 2026-01-05 00:38:34.926361 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-05 00:38:34.926377 | orchestrator | Monday 05 January 2026 00:38:12 +0000 (0:00:00.133) 0:00:00.133 ******** 2026-01-05 00:38:34.926392 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-05 00:38:34.926407 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-05 00:38:34.926422 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-05 00:38:34.926437 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-05 00:38:34.926452 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-05 00:38:34.926466 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-05 00:38:34.926480 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-05 00:38:34.926495 | orchestrator | 2026-01-05 00:38:34.926509 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-05 00:38:34.926524 | orchestrator | 2026-01-05 00:38:34.926539 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-05 00:38:34.926581 | orchestrator | Monday 05 January 2026 00:38:13 +0000 (0:00:00.833) 0:00:00.966 ******** 2026-01-05 00:38:34.926597 | orchestrator | ok: [testbed-manager] 2026-01-05 00:38:34.926613 | orchestrator | 2026-01-05 00:38:34.926628 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-05 00:38:34.926643 | orchestrator | 2026-01-05 00:38:34.926734 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-05 00:38:34.926752 | orchestrator | Monday 05 January 2026 00:38:15 +0000 (0:00:02.470) 0:00:03.437 ******** 2026-01-05 00:38:34.926767 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:38:34.926782 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:38:34.926797 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:38:34.926812 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:38:34.926822 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:38:34.926833 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:38:34.926845 | orchestrator | 2026-01-05 00:38:34.926860 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-05 00:38:34.926875 | orchestrator | 2026-01-05 00:38:34.926886 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-05 00:38:34.926901 | orchestrator | Monday 05 January 2026 00:38:17 +0000 (0:00:01.938) 0:00:05.375 ******** 2026-01-05 00:38:34.926922 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-05 00:38:34.926941 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-05 00:38:34.926956 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-05 00:38:34.926980 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-05 00:38:34.926994 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-05 00:38:34.927008 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-05 00:38:34.927024 | orchestrator | 2026-01-05 00:38:34.927038 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-01-05 00:38:34.927053 | orchestrator | Monday 05 January 2026 00:38:19 +0000 (0:00:01.572) 0:00:06.948 ******** 2026-01-05 00:38:34.927068 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:38:34.927084 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:38:34.927098 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:38:34.927112 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:38:34.927127 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:38:34.927141 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:38:34.927156 | orchestrator | 2026-01-05 00:38:34.927171 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-05 00:38:34.927184 | orchestrator | Monday 05 January 2026 00:38:23 +0000 (0:00:04.077) 0:00:11.026 ******** 2026-01-05 00:38:34.927197 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:38:34.927210 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:38:34.927223 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:38:34.927236 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:38:34.927251 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:38:34.927266 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:38:34.927281 | orchestrator | 2026-01-05 00:38:34.927296 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-05 00:38:34.927310 | orchestrator | 2026-01-05 00:38:34.927325 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-05 00:38:34.927334 | orchestrator | Monday 05 January 2026 00:38:24 +0000 (0:00:00.758) 0:00:11.784 ******** 2026-01-05 
00:38:34.927343 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:38:34.927351 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:38:34.927360 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:38:34.927381 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:38:34.927396 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:38:34.927410 | orchestrator | changed: [testbed-manager] 2026-01-05 00:38:34.927423 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:38:34.927437 | orchestrator | 2026-01-05 00:38:34.927451 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-05 00:38:34.927464 | orchestrator | Monday 05 January 2026 00:38:25 +0000 (0:00:01.640) 0:00:13.425 ******** 2026-01-05 00:38:34.927479 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:38:34.927493 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:38:34.927507 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:38:34.927522 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:38:34.927537 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:38:34.927552 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:38:34.927588 | orchestrator | changed: [testbed-manager] 2026-01-05 00:38:34.927603 | orchestrator | 2026-01-05 00:38:34.927617 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-05 00:38:34.927631 | orchestrator | Monday 05 January 2026 00:38:27 +0000 (0:00:01.793) 0:00:15.219 ******** 2026-01-05 00:38:34.927644 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:38:34.927681 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:38:34.927695 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:38:34.927710 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:38:34.927723 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:38:34.927736 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:38:34.927750 | orchestrator | ok: [testbed-manager] 
2026-01-05 00:38:34.927763 | orchestrator | 2026-01-05 00:38:34.927777 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-05 00:38:34.927791 | orchestrator | Monday 05 January 2026 00:38:29 +0000 (0:00:01.666) 0:00:16.885 ******** 2026-01-05 00:38:34.927804 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:38:34.927817 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:38:34.927832 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:38:34.927845 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:38:34.927860 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:38:34.927875 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:38:34.927889 | orchestrator | changed: [testbed-manager] 2026-01-05 00:38:34.927903 | orchestrator | 2026-01-05 00:38:34.927917 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-05 00:38:34.927930 | orchestrator | Monday 05 January 2026 00:38:31 +0000 (0:00:01.967) 0:00:18.853 ******** 2026-01-05 00:38:34.927943 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:38:34.927957 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:38:34.927971 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:38:34.927984 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:38:34.927997 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:38:34.928011 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:38:34.928024 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:38:34.928038 | orchestrator | 2026-01-05 00:38:34.928052 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-05 00:38:34.928067 | orchestrator | 2026-01-05 00:38:34.928081 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-05 00:38:34.928096 | orchestrator | Monday 05 January 2026 00:38:31 +0000 (0:00:00.655) 
0:00:19.508 ******** 2026-01-05 00:38:34.928109 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:38:34.928125 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:38:34.928139 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:38:34.928154 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:38:34.928169 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:38:34.928184 | orchestrator | ok: [testbed-manager] 2026-01-05 00:38:34.928199 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:38:34.928214 | orchestrator | 2026-01-05 00:38:34.928229 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 00:38:34.928246 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-05 00:38:34.928287 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:38:34.928304 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:38:34.928320 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:38:34.928334 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:38:34.928349 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:38:34.928364 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 00:38:34.928380 | orchestrator | 2026-01-05 00:38:34.928396 | orchestrator | 2026-01-05 00:38:34.928411 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 00:38:34.928427 | orchestrator | Monday 05 January 2026 00:38:34 +0000 (0:00:03.092) 0:00:22.600 ******** 2026-01-05 00:38:34.928443 | orchestrator | 
=============================================================================== 2026-01-05 00:38:34.928459 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.08s 2026-01-05 00:38:34.928476 | orchestrator | Install python3-docker -------------------------------------------------- 3.09s 2026-01-05 00:38:34.928492 | orchestrator | Apply netplan configuration --------------------------------------------- 2.47s 2026-01-05 00:38:34.928507 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.97s 2026-01-05 00:38:34.928521 | orchestrator | Apply netplan configuration --------------------------------------------- 1.94s 2026-01-05 00:38:34.928536 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.79s 2026-01-05 00:38:34.928550 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.67s 2026-01-05 00:38:34.928564 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2026-01-05 00:38:34.928579 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.57s 2026-01-05 00:38:34.928594 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s 2026-01-05 00:38:34.928609 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.76s 2026-01-05 00:38:34.928640 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s 2026-01-05 00:38:35.687505 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-05 00:38:47.854911 | orchestrator | 2026-01-05 00:38:47 | INFO  | Task d968b6cd-04a7-43b3-a9b2-09bcd99968c8 (reboot) was prepared for execution. 
2026-01-05 00:38:47.855012 | orchestrator | 2026-01-05 00:38:47 | INFO  | It takes a moment until task d968b6cd-04a7-43b3-a9b2-09bcd99968c8 (reboot) has been started and output is visible here. 2026-01-05 00:38:58.434214 | orchestrator | 2026-01-05 00:38:58.434304 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-05 00:38:58.434313 | orchestrator | 2026-01-05 00:38:58.434320 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-05 00:38:58.434325 | orchestrator | Monday 05 January 2026 00:38:52 +0000 (0:00:00.219) 0:00:00.219 ******** 2026-01-05 00:38:58.434331 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:38:58.434336 | orchestrator | 2026-01-05 00:38:58.434341 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-05 00:38:58.434346 | orchestrator | Monday 05 January 2026 00:38:52 +0000 (0:00:00.110) 0:00:00.330 ******** 2026-01-05 00:38:58.434367 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:38:58.434372 | orchestrator | 2026-01-05 00:38:58.434377 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-05 00:38:58.434394 | orchestrator | Monday 05 January 2026 00:38:53 +0000 (0:00:00.944) 0:00:01.275 ******** 2026-01-05 00:38:58.434399 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:38:58.434404 | orchestrator | 2026-01-05 00:38:58.434408 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-05 00:38:58.434413 | orchestrator | 2026-01-05 00:38:58.434418 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-05 00:38:58.434422 | orchestrator | Monday 05 January 2026 00:38:53 +0000 (0:00:00.138) 0:00:01.413 ******** 2026-01-05 00:38:58.434427 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:38:58.434432 | 
orchestrator | 2026-01-05 00:38:58.434436 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-05 00:38:58.434441 | orchestrator | Monday 05 January 2026 00:38:53 +0000 (0:00:00.115) 0:00:01.529 ******** 2026-01-05 00:38:58.434445 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:38:58.434450 | orchestrator | 2026-01-05 00:38:58.434455 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-05 00:38:58.434459 | orchestrator | Monday 05 January 2026 00:38:54 +0000 (0:00:00.703) 0:00:02.233 ******** 2026-01-05 00:38:58.434464 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:38:58.434468 | orchestrator | 2026-01-05 00:38:58.434473 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-05 00:38:58.434478 | orchestrator | 2026-01-05 00:38:58.434482 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-05 00:38:58.434487 | orchestrator | Monday 05 January 2026 00:38:54 +0000 (0:00:00.118) 0:00:02.352 ******** 2026-01-05 00:38:58.434491 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:38:58.434496 | orchestrator | 2026-01-05 00:38:58.434512 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-05 00:38:58.434517 | orchestrator | Monday 05 January 2026 00:38:54 +0000 (0:00:00.225) 0:00:02.577 ******** 2026-01-05 00:38:58.434522 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:38:58.434526 | orchestrator | 2026-01-05 00:38:58.434531 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-05 00:38:58.434535 | orchestrator | Monday 05 January 2026 00:38:55 +0000 (0:00:00.694) 0:00:03.271 ******** 2026-01-05 00:38:58.434540 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:38:58.434545 | orchestrator | 2026-01-05 00:38:58.434549 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-05 00:38:58.434554 | orchestrator | 2026-01-05 00:38:58.434558 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-05 00:38:58.434563 | orchestrator | Monday 05 January 2026 00:38:55 +0000 (0:00:00.110) 0:00:03.381 ******** 2026-01-05 00:38:58.434567 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:38:58.434572 | orchestrator | 2026-01-05 00:38:58.434576 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-05 00:38:58.434581 | orchestrator | Monday 05 January 2026 00:38:55 +0000 (0:00:00.118) 0:00:03.500 ******** 2026-01-05 00:38:58.434585 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:38:58.434590 | orchestrator | 2026-01-05 00:38:58.434594 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-05 00:38:58.434599 | orchestrator | Monday 05 January 2026 00:38:56 +0000 (0:00:00.683) 0:00:04.184 ******** 2026-01-05 00:38:58.434604 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:38:58.434608 | orchestrator | 2026-01-05 00:38:58.434613 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-05 00:38:58.434617 | orchestrator | 2026-01-05 00:38:58.434622 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-05 00:38:58.434626 | orchestrator | Monday 05 January 2026 00:38:56 +0000 (0:00:00.128) 0:00:04.313 ******** 2026-01-05 00:38:58.434682 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:38:58.434688 | orchestrator | 2026-01-05 00:38:58.434693 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-05 00:38:58.434697 | orchestrator | Monday 05 January 2026 00:38:56 +0000 (0:00:00.113) 0:00:04.426 ******** 2026-01-05 
00:38:58.434702 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:38:58.434706 | orchestrator |
2026-01-05 00:38:58.434712 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:38:58.434716 | orchestrator | Monday 05 January 2026 00:38:57 +0000 (0:00:00.654) 0:00:05.081 ********
2026-01-05 00:38:58.434721 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:38:58.434725 | orchestrator |
2026-01-05 00:38:58.434730 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-05 00:38:58.434734 | orchestrator |
2026-01-05 00:38:58.434739 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-05 00:38:58.434744 | orchestrator | Monday 05 January 2026 00:38:57 +0000 (0:00:00.118) 0:00:05.199 ********
2026-01-05 00:38:58.434748 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:38:58.434753 | orchestrator |
2026-01-05 00:38:58.434758 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-05 00:38:58.434766 | orchestrator | Monday 05 January 2026 00:38:57 +0000 (0:00:00.103) 0:00:05.302 ********
2026-01-05 00:38:58.434773 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:38:58.434781 | orchestrator |
2026-01-05 00:38:58.434788 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-05 00:38:58.434795 | orchestrator | Monday 05 January 2026 00:38:57 +0000 (0:00:00.708) 0:00:06.011 ********
2026-01-05 00:38:58.434817 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:38:58.434825 | orchestrator |
2026-01-05 00:38:58.434832 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:38:58.434841 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:38:58.434851 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:38:58.434858 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:38:58.434865 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:38:58.434872 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:38:58.434880 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:38:58.434888 | orchestrator |
2026-01-05 00:38:58.434895 | orchestrator |
2026-01-05 00:38:58.434903 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:38:58.434909 | orchestrator | Monday 05 January 2026 00:38:58 +0000 (0:00:00.045) 0:00:06.057 ********
2026-01-05 00:38:58.434913 | orchestrator | ===============================================================================
2026-01-05 00:38:58.434918 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.39s
2026-01-05 00:38:58.434923 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s
2026-01-05 00:38:58.434927 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s
2026-01-05 00:38:58.766252 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-05 00:39:10.976062 | orchestrator | 2026-01-05 00:39:10 | INFO  | Task 9bc3eb11-053b-4b1e-a700-72a31a66fa9f (wait-for-connection) was prepared for execution.
2026-01-05 00:39:10.976183 | orchestrator | 2026-01-05 00:39:10 | INFO  | It takes a moment until task 9bc3eb11-053b-4b1e-a700-72a31a66fa9f (wait-for-connection) has been started and output is visible here.
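The step above follows the usual reboot-and-repoll pattern: trigger the reboot without waiting, then run `osism apply wait-for-connection` until every node answers again. A minimal shell sketch of that second half, assuming a hypothetical injectable `PROBE` command and `SLEEP_SECONDS` knob (for illustration/testing; a real probe would be an SSH check):

```shell
#!/usr/bin/env bash
# Sketch of "wait until remote systems are reachable" (assumptions: PROBE and
# SLEEP_SECONDS are illustrative overrides; a real probe would be e.g.
#   ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true

wait_for_nodes() {
    local max_attempts=$1; shift
    local probe="${PROBE:-ssh -o BatchMode=yes -o ConnectTimeout=5}"
    local node attempt
    for node in "$@"; do
        attempt=1
        # Retry the probe until the node answers or attempts run out.
        until $probe "$node" true 2>/dev/null; do
            if (( attempt++ == max_attempts )); then
                echo "timeout waiting for ${node}" >&2
                return 1
            fi
            sleep "${SLEEP_SECONDS:-5}"
        done
    done
}
```

In the job itself this logic lives in an Ansible play (`wait_for_connection`), not in shell; the sketch only mirrors its retry behavior.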
2026-01-05 00:39:26.935295 | orchestrator |
2026-01-05 00:39:26.935419 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-05 00:39:26.935436 | orchestrator |
2026-01-05 00:39:26.935449 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-05 00:39:26.935462 | orchestrator | Monday 05 January 2026 00:39:14 +0000 (0:00:00.209) 0:00:00.209 ********
2026-01-05 00:39:26.935474 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:39:26.935487 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:39:26.935498 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:39:26.935510 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:39:26.935522 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:39:26.935533 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:39:26.935545 | orchestrator |
2026-01-05 00:39:26.935556 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:39:26.935569 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:39:26.935582 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:39:26.935594 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:39:26.935607 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:39:26.935655 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:39:26.935677 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:39:26.935696 | orchestrator |
2026-01-05 00:39:26.935714 | orchestrator |
2026-01-05 00:39:26.935732 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:39:26.935744 | orchestrator | Monday 05 January 2026 00:39:26 +0000 (0:00:11.644) 0:00:11.854 ********
2026-01-05 00:39:26.935755 | orchestrator | ===============================================================================
2026-01-05 00:39:26.935766 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.64s
2026-01-05 00:39:27.260815 | orchestrator | + osism apply hddtemp
2026-01-05 00:39:39.353304 | orchestrator | 2026-01-05 00:39:39 | INFO  | Task 5c80c510-f6da-44c2-92b4-aeff29e702b3 (hddtemp) was prepared for execution.
2026-01-05 00:39:39.353431 | orchestrator | 2026-01-05 00:39:39 | INFO  | It takes a moment until task 5c80c510-f6da-44c2-92b4-aeff29e702b3 (hddtemp) has been started and output is visible here.
2026-01-05 00:40:09.469858 | orchestrator |
2026-01-05 00:40:09.469954 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-05 00:40:09.469971 | orchestrator |
2026-01-05 00:40:09.469982 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-05 00:40:09.469993 | orchestrator | Monday 05 January 2026 00:39:43 +0000 (0:00:00.233) 0:00:00.233 ********
2026-01-05 00:40:09.470003 | orchestrator | ok: [testbed-manager]
2026-01-05 00:40:09.470014 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:40:09.470080 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:40:09.470090 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:40:09.470100 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:09.470110 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:09.470120 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:09.470130 | orchestrator |
2026-01-05 00:40:09.470140 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-05 00:40:09.470150 | orchestrator | Monday 05 January 2026 00:39:44 +0000 (0:00:00.627) 0:00:00.860 ********
2026-01-05 00:40:09.470183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:40:09.470196 | orchestrator |
2026-01-05 00:40:09.470206 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-05 00:40:09.470216 | orchestrator | Monday 05 January 2026 00:39:45 +0000 (0:00:01.097) 0:00:01.958 ********
2026-01-05 00:40:09.470225 | orchestrator | ok: [testbed-manager]
2026-01-05 00:40:09.470236 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:40:09.470246 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:40:09.470256 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:09.470265 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:40:09.470275 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:09.470285 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:09.470294 | orchestrator |
2026-01-05 00:40:09.470304 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-05 00:40:09.470314 | orchestrator | Monday 05 January 2026 00:39:47 +0000 (0:00:02.237) 0:00:04.195 ********
2026-01-05 00:40:09.470323 | orchestrator | changed: [testbed-manager]
2026-01-05 00:40:09.470333 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:40:09.470343 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:40:09.470353 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:40:09.470362 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:09.470371 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:09.470381 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:09.470391 | orchestrator |
2026-01-05 00:40:09.470400 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-05 00:40:09.470426 | orchestrator | Monday 05 January 2026 00:39:48 +0000 (0:00:01.191) 0:00:05.386 ********
2026-01-05 00:40:09.470437 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:40:09.470448 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:40:09.470460 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:40:09.470472 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:40:09.470483 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:40:09.470493 | orchestrator | ok: [testbed-manager]
2026-01-05 00:40:09.470505 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:40:09.470516 | orchestrator |
2026-01-05 00:40:09.470528 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-05 00:40:09.470539 | orchestrator | Monday 05 January 2026 00:39:49 +0000 (0:00:01.186) 0:00:06.573 ********
2026-01-05 00:40:09.470550 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:40:09.470566 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:40:09.470583 | orchestrator | changed: [testbed-manager]
2026-01-05 00:40:09.470619 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:40:09.470638 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:40:09.470656 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:40:09.470672 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:40:09.470691 | orchestrator |
2026-01-05 00:40:09.470709 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-05 00:40:09.470724 | orchestrator | Monday 05 January 2026 00:39:50 +0000 (0:00:00.869) 0:00:07.442 ********
2026-01-05 00:40:09.470736 | orchestrator | changed: [testbed-manager]
2026-01-05 00:40:09.470747 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:09.470759 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:40:09.470770 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:09.470781 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:40:09.470791 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:40:09.470801 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:09.470810 | orchestrator |
2026-01-05 00:40:09.470820 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-05 00:40:09.470830 | orchestrator | Monday 05 January 2026 00:40:05 +0000 (0:00:14.851) 0:00:22.294 ********
2026-01-05 00:40:09.470840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:40:09.470859 | orchestrator |
2026-01-05 00:40:09.470869 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-05 00:40:09.470879 | orchestrator | Monday 05 January 2026 00:40:07 +0000 (0:00:01.507) 0:00:23.801 ********
2026-01-05 00:40:09.470889 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:40:09.470899 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:40:09.470908 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:40:09.470918 | orchestrator | changed: [testbed-manager]
2026-01-05 00:40:09.470928 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:40:09.470937 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:40:09.470947 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:40:09.470956 | orchestrator |
2026-01-05 00:40:09.470966 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:40:09.470976 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:40:09.471003 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:09.471014 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:09.471024 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:09.471034 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:09.471044 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:09.471053 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:40:09.471063 | orchestrator |
2026-01-05 00:40:09.471073 | orchestrator |
2026-01-05 00:40:09.471083 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:40:09.471092 | orchestrator | Monday 05 January 2026 00:40:09 +0000 (0:00:01.931) 0:00:25.732 ********
2026-01-05 00:40:09.471102 | orchestrator | ===============================================================================
2026-01-05 00:40:09.471112 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.85s
2026-01-05 00:40:09.471122 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.24s
2026-01-05 00:40:09.471131 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s
2026-01-05 00:40:09.471141 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.51s
2026-01-05 00:40:09.471150 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s
2026-01-05 00:40:09.471160 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.19s
2026-01-05 00:40:09.471169 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.10s
2026-01-05 00:40:09.471179 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.87s
2026-01-05 00:40:09.471194 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.63s
2026-01-05 00:40:09.813053 | orchestrator | ++ semver 9.5.0 7.1.1
2026-01-05 00:40:09.864778 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-05 00:40:09.864910 | orchestrator | + sudo systemctl restart manager.service
2026-01-05 00:40:23.791708 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-05 00:40:23.792037 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-05 00:40:23.792096 | orchestrator | + local max_attempts=60
2026-01-05 00:40:23.792112 | orchestrator | + local name=ceph-ansible
2026-01-05 00:40:23.792123 | orchestrator | + local attempt_num=1
2026-01-05 00:40:23.792149 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:23.835684 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:23.835791 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:40:23.835805 | orchestrator | + sleep 5
2026-01-05 00:40:28.839871 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:28.883520 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:28.883654 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:40:28.883667 | orchestrator | + sleep 5
2026-01-05 00:40:33.887480 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:33.918238 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:33.918375 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:40:33.918401 | orchestrator | + sleep 5
2026-01-05 00:40:38.923198 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:38.965889 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:38.965980 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:40:38.965994 | orchestrator | + sleep 5
2026-01-05 00:40:43.970335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:44.010181 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:44.010281 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:40:44.010296 | orchestrator | + sleep 5
2026-01-05 00:40:49.014379 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:49.046739 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:49.046857 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:40:49.046875 | orchestrator | + sleep 5
2026-01-05 00:40:54.051575 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:54.092027 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:54.092148 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:40:54.092173 | orchestrator | + sleep 5
2026-01-05 00:40:59.097238 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:40:59.128227 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:40:59.128423 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:40:59.128445 | orchestrator | + sleep 5
2026-01-05 00:41:04.130726 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:41:04.171766 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:41:04.171859 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:41:04.171870 | orchestrator | + sleep 5
2026-01-05 00:41:09.175775 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:41:09.224745 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:41:09.224857 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:41:09.224873 | orchestrator | + sleep 5
2026-01-05 00:41:14.229386 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:41:14.264797 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:41:14.264880 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:41:14.264887 | orchestrator | + sleep 5
2026-01-05 00:41:19.269673 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:41:19.311868 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:41:19.311983 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:41:19.312000 | orchestrator | + sleep 5
2026-01-05 00:41:24.317125 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:41:24.358383 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-05 00:41:24.358515 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-05 00:41:24.358540 | orchestrator | + sleep 5
2026-01-05 00:41:29.364165 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-05 00:41:29.406673 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:41:29.406795 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-05 00:41:29.406810 | orchestrator | + local max_attempts=60
2026-01-05 00:41:29.406823 | orchestrator | + local name=kolla-ansible
2026-01-05 00:41:29.406835 | orchestrator | + local attempt_num=1
2026-01-05 00:41:29.407725 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-05 00:41:29.452068 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:41:29.452169 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-05 00:41:29.452181 | orchestrator | + local max_attempts=60
2026-01-05 00:41:29.452190 | orchestrator | + local name=osism-ansible
2026-01-05 00:41:29.452198 | orchestrator | + local attempt_num=1
2026-01-05 00:41:29.452216 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-05 00:41:29.480425 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-05 00:41:29.480534 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-05 00:41:29.480550 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-05 00:41:29.630267 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-05 00:41:29.784260 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-05 00:41:29.965865 | orchestrator | ARA in osism-ansible already disabled.
2026-01-05 00:41:30.136511 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-05 00:41:30.137054 | orchestrator | + osism apply gather-facts
2026-01-05 00:41:42.455944 | orchestrator | 2026-01-05 00:41:42 | INFO  | Task 8e451f44-daa7-4521-9e5e-cbbfe452dc6a (gather-facts) was prepared for execution.
2026-01-05 00:41:42.456085 | orchestrator | 2026-01-05 00:41:42 | INFO  | It takes a moment until task 8e451f44-daa7-4521-9e5e-cbbfe452dc6a (gather-facts) has been started and output is visible here.
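The `set -x` trace above shows the shape of `wait_for_container_healthy`: poll the container's Docker health status every 5 seconds until it reports `healthy`, giving up after `max_attempts` polls. A sketch reconstructed from that trace, with a `health_probe` wrapper and `SLEEP_SECONDS` override added here purely as illustrative assumptions so the loop can be exercised without a Docker daemon:

```shell
#!/usr/bin/env bash
# Reconstructed sketch of wait_for_container_healthy, based on the trace above.

# Default probe; redefine health_probe to test without Docker (illustrative).
health_probe() {
    docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status
    while true; do
        status=$(health_probe "$name")
        # Done as soon as Docker reports the container healthy.
        [[ $status == healthy ]] && return 0
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep "${SLEEP_SECONDS:-5}"
    done
}
```

The trace matches this structure (the `local` assignments, the `[[ ... == \h\e\a\l\t\h\y ]]` test, the `(( attempt_num++ == max_attempts ))` guard, `sleep 5`); the exact original source is not shown in the log, so details beyond the trace are assumptions.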
2026-01-05 00:41:56.488408 | orchestrator |
2026-01-05 00:41:56.488545 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 00:41:56.488627 | orchestrator |
2026-01-05 00:41:56.488639 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 00:41:56.488651 | orchestrator | Monday 05 January 2026 00:41:46 +0000 (0:00:00.245) 0:00:00.245 ********
2026-01-05 00:41:56.488663 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:41:56.488675 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:41:56.488686 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:41:56.488697 | orchestrator | ok: [testbed-manager]
2026-01-05 00:41:56.488708 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:41:56.488719 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:41:56.488730 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:41:56.488741 | orchestrator |
2026-01-05 00:41:56.488752 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-05 00:41:56.488765 | orchestrator |
2026-01-05 00:41:56.488784 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-05 00:41:56.488803 | orchestrator | Monday 05 January 2026 00:41:55 +0000 (0:00:09.033) 0:00:09.278 ********
2026-01-05 00:41:56.488822 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:41:56.488841 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:41:56.488859 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:41:56.488877 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:41:56.488894 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:41:56.488910 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:41:56.488928 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:41:56.488947 | orchestrator |
2026-01-05 00:41:56.488966 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:41:56.488986 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:41:56.489009 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:41:56.489029 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:41:56.489045 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:41:56.489059 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:41:56.489102 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:41:56.489116 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 00:41:56.489129 | orchestrator |
2026-01-05 00:41:56.489141 | orchestrator |
2026-01-05 00:41:56.489154 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:41:56.489168 | orchestrator | Monday 05 January 2026 00:41:56 +0000 (0:00:00.517) 0:00:09.795 ********
2026-01-05 00:41:56.489181 | orchestrator | ===============================================================================
2026-01-05 00:41:56.489193 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.03s
2026-01-05 00:41:56.489206 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2026-01-05 00:41:56.732979 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-01-05 00:41:56.744401 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-01-05 00:41:56.755019 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-01-05 00:41:56.765651 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-01-05 00:41:56.776007 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-01-05 00:41:56.794830 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-01-05 00:41:56.807165 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-01-05 00:41:56.820763 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-01-05 00:41:56.834013 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-01-05 00:41:56.844899 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-01-05 00:41:56.859036 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-01-05 00:41:56.876371 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-01-05 00:41:56.891472 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-01-05 00:41:56.909768 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-01-05 00:41:56.932210 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-01-05 00:41:56.945680 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-05 00:41:56.956723 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-05 00:41:56.970621 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-05 00:41:56.985502 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-05 00:41:57.004492 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-05 00:41:57.018229 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-05 00:41:57.129775 | orchestrator | ok: Runtime: 0:24:50.316906
2026-01-05 00:41:57.230439 |
2026-01-05 00:41:57.230597 | TASK [Deploy services]
2026-01-05 00:41:57.950223 | orchestrator |
2026-01-05 00:41:57.950437 | orchestrator | # DEPLOY SERVICES
2026-01-05 00:41:57.950460 | orchestrator |
2026-01-05 00:41:57.950472 | orchestrator | + set -e
2026-01-05 00:41:57.950483 | orchestrator | + echo
2026-01-05 00:41:57.950494 | orchestrator | + echo '# DEPLOY SERVICES'
2026-01-05 00:41:57.950506 | orchestrator | + echo
2026-01-05 00:41:57.950545 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-05 00:41:57.950597 | orchestrator | ++ export INTERACTIVE=false
2026-01-05 00:41:57.950610 | orchestrator | ++ INTERACTIVE=false
2026-01-05 00:41:57.950620 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-05 00:41:57.950639 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-05 00:41:57.950648 | orchestrator | + source /opt/manager-vars.sh
2026-01-05 00:41:57.950660 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-05 00:41:57.950670 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-05 00:41:57.950684 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-05 00:41:57.950693 | orchestrator | ++ CEPH_VERSION=reef
2026-01-05 00:41:57.950706 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-05 00:41:57.950715 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-05 00:41:57.950743 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-05 00:41:57.950752 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-05 00:41:57.950761 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-05 00:41:57.950771 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-05 00:41:57.950780 | orchestrator | ++ export ARA=false
2026-01-05 00:41:57.950789 | orchestrator | ++ ARA=false
2026-01-05 00:41:57.950797 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-05 00:41:57.950806 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-05 00:41:57.950815 | orchestrator | ++ export TEMPEST=false
2026-01-05 00:41:57.950824 | orchestrator | ++ TEMPEST=false
2026-01-05 00:41:57.950832 | orchestrator | ++ export IS_ZUUL=true
2026-01-05 00:41:57.950841 | orchestrator | ++ IS_ZUUL=true
2026-01-05 00:41:57.950850 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95
2026-01-05 00:41:57.950858 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95
2026-01-05 00:41:57.950868 | orchestrator | ++ export EXTERNAL_API=false
2026-01-05 00:41:57.950876 | orchestrator | ++ EXTERNAL_API=false
2026-01-05 00:41:57.950885 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-05 00:41:57.950893 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-05 00:41:57.950902 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-05 00:41:57.950911 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-05 00:41:57.950919 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-05 00:41:57.950934 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-05 00:41:57.950944 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-01-05 00:41:57.958376 | orchestrator | + set -e
2026-01-05 00:41:57.958430 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-05 00:41:57.958455 | orchestrator | ++ export INTERACTIVE=false
2026-01-05 00:41:57.958476 | orchestrator | ++ INTERACTIVE=false
2026-01-05 00:41:57.958495 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-05 00:41:57.958514 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-05 00:41:57.958533 | orchestrator | + source /opt/manager-vars.sh
2026-01-05 00:41:57.958571 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-05 00:41:57.958586 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-05 00:41:57.958617 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-05 00:41:57.958638 | orchestrator | ++ CEPH_VERSION=reef
2026-01-05 00:41:57.958658 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-05 00:41:57.958671 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-05 00:41:57.958682 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-05 00:41:57.958693 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-05 00:41:57.958704 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-05 00:41:57.958715 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-05 00:41:57.958726 | orchestrator | ++ export ARA=false
2026-01-05 00:41:57.958737 | orchestrator | ++ ARA=false
2026-01-05 00:41:57.958748 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-05 00:41:57.958758 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-05 00:41:57.958769 | orchestrator | ++ export TEMPEST=false
2026-01-05 00:41:57.958782 | orchestrator | ++ TEMPEST=false
2026-01-05 00:41:57.958792 | orchestrator | ++ export IS_ZUUL=true
2026-01-05 00:41:57.958803 | orchestrator | ++ IS_ZUUL=true
2026-01-05 00:41:57.958814 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95
2026-01-05 00:41:57.958825 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95
2026-01-05 00:41:57.958836 | orchestrator | ++ export EXTERNAL_API=false
2026-01-05 00:41:57.958846 | orchestrator | ++ EXTERNAL_API=false
2026-01-05 00:41:57.958857 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-05 00:41:57.958868 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-05 00:41:57.958879 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-05 00:41:57.958890 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-05 00:41:57.958927 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-05 00:41:57.958938 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-05 00:41:57.958949 | orchestrator | + echo
2026-01-05 00:41:57.958960 | orchestrator |
2026-01-05 00:41:57.958972 | orchestrator | # PULL IMAGES
2026-01-05 00:41:57.958983 | orchestrator |
2026-01-05 00:41:57.958994 | orchestrator | + echo '# PULL IMAGES'
2026-01-05 00:41:57.959005 | orchestrator | + echo
2026-01-05 00:41:57.960039 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-05 00:41:58.014170 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-05 00:41:58.014281 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-05 00:41:59.743988 | orchestrator | 2026-01-05 00:41:59 | INFO  | Trying to run play pull-images in environment custom
2026-01-05 00:42:09.829445 | orchestrator | 2026-01-05 00:42:09 | INFO  | Task 8ba6bb59-7d1c-471e-ab99-e4b0ec2179e4 (pull-images) was prepared for execution.
2026-01-05 00:42:09.829598 | orchestrator | 2026-01-05 00:42:09 | INFO  | Task 8ba6bb59-7d1c-471e-ab99-e4b0ec2179e4 is running in background. No more output. Check ARA for logs.
2026-01-05 00:42:10.085325 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-01-05 00:42:22.043242 | orchestrator | 2026-01-05 00:42:22 | INFO  | Task b40fb087-5e11-4f82-bf32-0345daf2f92b (cgit) was prepared for execution.
2026-01-05 00:42:22.043390 | orchestrator | 2026-01-05 00:42:22 | INFO  | Task b40fb087-5e11-4f82-bf32-0345daf2f92b is running in background. No more output. Check ARA for logs.
2026-01-05 00:42:34.418635 | orchestrator | 2026-01-05 00:42:34 | INFO  | Task 3e5986d9-4b2f-433d-b3a0-f9810d5d6d4c (dotfiles) was prepared for execution.
2026-01-05 00:42:34.418756 | orchestrator | 2026-01-05 00:42:34 | INFO  | Task 3e5986d9-4b2f-433d-b3a0-f9810d5d6d4c is running in background. No more output. Check ARA for logs.
2026-01-05 00:42:47.090292 | orchestrator | 2026-01-05 00:42:47 | INFO  | Task ad518a0e-648a-4ea0-8ad6-48a893137f62 (homer) was prepared for execution.
2026-01-05 00:42:47.090409 | orchestrator | 2026-01-05 00:42:47 | INFO  | Task ad518a0e-648a-4ea0-8ad6-48a893137f62 is running in background. No more output. Check ARA for logs.
2026-01-05 00:42:59.935986 | orchestrator | 2026-01-05 00:42:59 | INFO  | Task f11567dc-1f86-4c9e-851f-cc7cfb68767f (phpmyadmin) was prepared for execution.
2026-01-05 00:42:59.936114 | orchestrator | 2026-01-05 00:42:59 | INFO  | Task f11567dc-1f86-4c9e-851f-cc7cfb68767f is running in background. No more output. Check ARA for logs.
2026-01-05 00:43:12.436398 | orchestrator | 2026-01-05 00:43:12 | INFO  | Task b7ef4191-2b13-4959-b55e-5f842f98df5c (sosreport) was prepared for execution.
2026-01-05 00:43:12.436544 | orchestrator | 2026-01-05 00:43:12 | INFO  | Task b7ef4191-2b13-4959-b55e-5f842f98df5c is running in background. No more output. Check ARA for logs.
2026-01-05 00:43:12.675826 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-01-05 00:43:12.682310 | orchestrator | + set -e
2026-01-05 00:43:12.682350 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-05 00:43:12.682364 | orchestrator | ++ export INTERACTIVE=false
2026-01-05 00:43:12.682384 | orchestrator | ++ INTERACTIVE=false
2026-01-05 00:43:12.682398 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-05 00:43:12.682409 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-05 00:43:12.683041 | orchestrator | + source /opt/manager-vars.sh
2026-01-05 00:43:12.683155 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-05 00:43:12.683180 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-05 00:43:12.683200 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-05 00:43:12.683218 | orchestrator | ++ CEPH_VERSION=reef
2026-01-05 00:43:12.683289 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-05 00:43:12.683309 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-05 00:43:12.683328 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-05 00:43:12.683346 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-05 00:43:12.683364 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-05 00:43:12.683382 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-05 00:43:12.683401 | orchestrator | ++ export ARA=false
2026-01-05 00:43:12.683420 | orchestrator | ++ ARA=false
2026-01-05 00:43:12.683439 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-05 00:43:12.683497 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-05 00:43:12.683544 | orchestrator | ++ export TEMPEST=false
2026-01-05 00:43:12.683565 | orchestrator | ++ TEMPEST=false
2026-01-05 00:43:12.683583 | orchestrator | ++ export IS_ZUUL=true
2026-01-05 00:43:12.683601 | orchestrator | ++ IS_ZUUL=true
2026-01-05 00:43:12.683641 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95
2026-01-05 00:43:12.683664 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95
2026-01-05 00:43:12.683682 | orchestrator | ++ export EXTERNAL_API=false
2026-01-05 00:43:12.683698 | orchestrator | ++ EXTERNAL_API=false
2026-01-05 00:43:12.683716 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-05 00:43:12.683733 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-05 00:43:12.683751 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-05 00:43:12.683767 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-05 00:43:12.683784 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-05 00:43:12.683800 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-05 00:43:12.683927 | orchestrator | ++ semver 9.5.0 8.0.3
2026-01-05 00:43:12.739818 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-05 00:43:12.739938 | orchestrator | + osism apply frr
2026-01-05 00:43:24.846326 | orchestrator | 2026-01-05 00:43:24 | INFO  | Task fad58b72-5b04-4952-b156-444ffbfe29cf (frr) was prepared for execution.
2026-01-05 00:43:24.846460 | orchestrator | 2026-01-05 00:43:24 | INFO  | It takes a moment until task fad58b72-5b04-4952-b156-444ffbfe29cf (frr) has been started and output is visible here.
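The deploy scripts above gate optional steps on a `semver` helper that compares the manager version against a minimum (here `semver 9.5.0 8.0.3` prints `1`, and `[[ 1 -ge 0 ]]` lets the step run). The helper's actual implementation is not shown in the log; the following is a minimal sketch of the same compare-and-gate pattern, assuming plain `MAJOR.MINOR.PATCH` versions with no pre-release tags:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the version gate: semver A B prints 1, 0, or -1
# depending on whether A is newer than, equal to, or older than B.
semver() {
  local IFS=.
  local -a a=($1) b=($2)   # split both versions on "."
  local i
  for i in 0 1 2; do
    (( a[i] > b[i] )) && { echo 1; return; }
    (( a[i] < b[i] )) && { echo -1; return; }
  done
  echo 0
}

# Gate a step on "manager version >= 8.0.3", as the script above does.
if [[ $(semver 9.5.0 8.0.3) -ge 0 ]]; then
  echo "manager is new enough, running step"
fi
```

A `-ge 0` test means the step runs for equal or newer versions, which matches the log: both `semver 9.5.0 7.0.0` and `semver 9.5.0 8.0.3` return `1` and the gated `osism apply` calls proceed.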
2026-01-05 00:44:01.170850 | orchestrator |
2026-01-05 00:44:01.170956 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-01-05 00:44:01.170970 | orchestrator |
2026-01-05 00:44:01.170978 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-01-05 00:44:01.170994 | orchestrator | Monday 05 January 2026 00:43:32 +0000 (0:00:00.843) 0:00:00.843 ********
2026-01-05 00:44:01.171002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-01-05 00:44:01.171012 | orchestrator |
2026-01-05 00:44:01.171020 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-01-05 00:44:01.171027 | orchestrator | Monday 05 January 2026 00:43:32 +0000 (0:00:00.294) 0:00:01.138 ********
2026-01-05 00:44:01.171035 | orchestrator | changed: [testbed-manager]
2026-01-05 00:44:01.171044 | orchestrator |
2026-01-05 00:44:01.171052 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-01-05 00:44:01.171062 | orchestrator | Monday 05 January 2026 00:43:35 +0000 (0:00:02.089) 0:00:03.228 ********
2026-01-05 00:44:01.171070 | orchestrator | changed: [testbed-manager]
2026-01-05 00:44:01.171077 | orchestrator |
2026-01-05 00:44:01.171085 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-01-05 00:44:01.171092 | orchestrator | Monday 05 January 2026 00:43:48 +0000 (0:00:13.990) 0:00:17.218 ********
2026-01-05 00:44:01.171100 | orchestrator | ok: [testbed-manager]
2026-01-05 00:44:01.171109 | orchestrator |
2026-01-05 00:44:01.171116 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-01-05 00:44:01.171124 | orchestrator | Monday 05 January 2026 00:43:50 +0000 (0:00:01.034) 0:00:18.253 ********
2026-01-05 00:44:01.171131 | orchestrator | changed: [testbed-manager]
2026-01-05 00:44:01.171139 | orchestrator |
2026-01-05 00:44:01.171146 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-01-05 00:44:01.171154 | orchestrator | Monday 05 January 2026 00:43:51 +0000 (0:00:01.121) 0:00:19.375 ********
2026-01-05 00:44:01.171161 | orchestrator | ok: [testbed-manager]
2026-01-05 00:44:01.171169 | orchestrator |
2026-01-05 00:44:01.171177 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-01-05 00:44:01.171185 | orchestrator | Monday 05 January 2026 00:43:52 +0000 (0:00:01.332) 0:00:20.707 ********
2026-01-05 00:44:01.171193 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:44:01.171200 | orchestrator |
2026-01-05 00:44:01.171208 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-01-05 00:44:01.171215 | orchestrator | Monday 05 January 2026 00:43:52 +0000 (0:00:00.233) 0:00:20.941 ********
2026-01-05 00:44:01.171241 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:44:01.171250 | orchestrator |
2026-01-05 00:44:01.171257 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-01-05 00:44:01.171265 | orchestrator | Monday 05 January 2026 00:43:52 +0000 (0:00:00.175) 0:00:21.117 ********
2026-01-05 00:44:01.171272 | orchestrator | changed: [testbed-manager]
2026-01-05 00:44:01.171279 | orchestrator |
2026-01-05 00:44:01.171287 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-01-05 00:44:01.171294 | orchestrator | Monday 05 January 2026 00:43:53 +0000 (0:00:01.058) 0:00:22.176 ********
2026-01-05 00:44:01.171302 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-01-05 00:44:01.171309 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-01-05 00:44:01.171319 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-01-05 00:44:01.171328 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-01-05 00:44:01.171337 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-01-05 00:44:01.171346 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-01-05 00:44:01.171354 | orchestrator |
2026-01-05 00:44:01.171363 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-01-05 00:44:01.171372 | orchestrator | Monday 05 January 2026 00:43:57 +0000 (0:00:03.661) 0:00:25.838 ********
2026-01-05 00:44:01.171381 | orchestrator | ok: [testbed-manager]
2026-01-05 00:44:01.171389 | orchestrator |
2026-01-05 00:44:01.171398 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-01-05 00:44:01.171407 | orchestrator | Monday 05 January 2026 00:43:59 +0000 (0:00:01.738) 0:00:27.577 ********
2026-01-05 00:44:01.171416 | orchestrator | changed: [testbed-manager]
2026-01-05 00:44:01.171424 | orchestrator |
2026-01-05 00:44:01.171433 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:44:01.171442 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 00:44:01.171451 | orchestrator |
2026-01-05 00:44:01.171461 | orchestrator |
2026-01-05 00:44:01.171494 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:44:01.171505 | orchestrator | Monday 05 January 2026 00:44:00 +0000 (0:00:01.440) 0:00:29.017 ********
2026-01-05 00:44:01.171514 | orchestrator | ===============================================================================
2026-01-05 00:44:01.171523 | orchestrator | osism.services.frr : Install frr package ------------------------------- 13.99s
2026-01-05 00:44:01.171532 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.66s
2026-01-05 00:44:01.171541 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.09s
2026-01-05 00:44:01.171549 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.74s
2026-01-05 00:44:01.171558 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.44s
2026-01-05 00:44:01.171583 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.33s
2026-01-05 00:44:01.171592 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.12s
2026-01-05 00:44:01.171600 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.06s
2026-01-05 00:44:01.171608 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.03s
2026-01-05 00:44:01.171617 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.29s
2026-01-05 00:44:01.171626 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.23s
2026-01-05 00:44:01.171634 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s
2026-01-05 00:44:01.584922 | orchestrator | + osism apply kubernetes
2026-01-05 00:44:03.932289 | orchestrator | 2026-01-05 00:44:03 | INFO  | Task 84e8c548-7dfc-4375-b477-e1d2bde92926 (kubernetes) was prepared for execution.
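The "Set sysctl parameters" task above loops over six kernel parameters for BGP routing on the manager (IP forwarding, redirects disabled, multipath hashing, loose reverse-path filtering). The role applies them via Ansible; a minimal sketch of rendering the same six items as a sysctl.d fragment (the file name `90-frr.conf` is hypothetical, not from the log) would be:

```shell
#!/usr/bin/env bash
# Render the six parameters from the task above as a sysctl.d-style fragment.
# In production you would place this under /etc/sysctl.d/ and run
# `sysctl -p <file>`; here we only write it to the working directory.
params="net.ipv4.ip_forward=1
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.fib_multipath_hash_policy=1
net.ipv4.conf.default.ignore_routes_with_linkdown=1
net.ipv4.conf.all.rp_filter=2"

printf '%s\n' "$params" > 90-frr.conf
```

Writing the values to a sysctl.d fragment rather than calling `sysctl -w` per key keeps the settings persistent across reboots, which is what the role's idempotent `changed:` output per item suggests it aims for.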
2026-01-05 00:44:03.932468 | orchestrator | 2026-01-05 00:44:03 | INFO  | It takes a moment until task 84e8c548-7dfc-4375-b477-e1d2bde92926 (kubernetes) has been started and output is visible here.
2026-01-05 00:44:29.750547 | orchestrator |
2026-01-05 00:44:29.750751 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-05 00:44:29.750780 | orchestrator |
2026-01-05 00:44:29.750799 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-05 00:44:29.750819 | orchestrator | Monday 05 January 2026 00:44:09 +0000 (0:00:00.179) 0:00:00.179 ********
2026-01-05 00:44:29.750839 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:44:29.750859 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:29.750878 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:44:29.750897 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:44:29.750914 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:44:29.750932 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:44:29.750949 | orchestrator |
2026-01-05 00:44:29.750968 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-05 00:44:29.750991 | orchestrator | Monday 05 January 2026 00:44:10 +0000 (0:00:00.775) 0:00:00.955 ********
2026-01-05 00:44:29.751013 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.751036 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.751057 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.751078 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.751100 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.751122 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.751143 | orchestrator |
2026-01-05 00:44:29.751166 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-05 00:44:29.751192 | orchestrator | Monday 05 January 2026 00:44:10 +0000 (0:00:00.615) 0:00:01.570 ********
2026-01-05 00:44:29.751214 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.751236 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.751256 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.751277 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.751299 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.751321 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.751343 | orchestrator |
2026-01-05 00:44:29.751364 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-05 00:44:29.751385 | orchestrator | Monday 05 January 2026 00:44:11 +0000 (0:00:00.637) 0:00:02.207 ********
2026-01-05 00:44:29.751405 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:44:29.751425 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:44:29.751470 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:44:29.751499 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:44:29.751519 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:44:29.751537 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:44:29.751554 | orchestrator |
2026-01-05 00:44:29.751571 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-05 00:44:29.751589 | orchestrator | Monday 05 January 2026 00:44:13 +0000 (0:00:01.613) 0:00:03.821 ********
2026-01-05 00:44:29.751608 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:44:29.751630 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:44:29.751646 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:44:29.751662 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:44:29.751678 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:44:29.751695 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:44:29.751711 | orchestrator |
2026-01-05 00:44:29.751728 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-01-05 00:44:29.751745 | orchestrator | Monday 05 January 2026 00:44:15 +0000 (0:00:01.976) 0:00:05.797 ********
2026-01-05 00:44:29.751761 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:44:29.751812 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:44:29.751830 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:44:29.751847 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:44:29.751863 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:44:29.751880 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:44:29.751896 | orchestrator |
2026-01-05 00:44:29.751931 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-05 00:44:29.751948 | orchestrator | Monday 05 January 2026 00:44:16 +0000 (0:00:00.941) 0:00:06.739 ********
2026-01-05 00:44:29.751964 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.751980 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.751998 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.752014 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.752031 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.752047 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.752062 | orchestrator |
2026-01-05 00:44:29.752079 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-05 00:44:29.752096 | orchestrator | Monday 05 January 2026 00:44:16 +0000 (0:00:00.577) 0:00:07.316 ********
2026-01-05 00:44:29.752111 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.752128 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.752145 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.752161 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.752177 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.752193 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.752210 | orchestrator |
2026-01-05 00:44:29.752226 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-05 00:44:29.752242 | orchestrator | Monday 05 January 2026 00:44:17 +0000 (0:00:00.788) 0:00:08.105 ********
2026-01-05 00:44:29.752259 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:44:29.752277 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:44:29.752294 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.752310 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:44:29.752326 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:44:29.752342 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.752358 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:44:29.752375 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:44:29.752391 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.752408 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:44:29.752489 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:44:29.752510 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.752527 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:44:29.752543 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:44:29.752559 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.752576 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 00:44:29.752593 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 00:44:29.752609 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.752624 | orchestrator |
2026-01-05 00:44:29.752640 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-05 00:44:29.752657 | orchestrator | Monday 05 January 2026 00:44:18 +0000 (0:00:00.594) 0:00:08.700 ********
2026-01-05 00:44:29.752672 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.752689 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.752705 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.752738 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.752753 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.752769 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.752785 | orchestrator |
2026-01-05 00:44:29.752801 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-05 00:44:29.752820 | orchestrator | Monday 05 January 2026 00:44:19 +0000 (0:00:01.142) 0:00:09.842 ********
2026-01-05 00:44:29.752837 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:44:29.752853 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:44:29.752869 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:44:29.752883 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:44:29.752900 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:44:29.752916 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:44:29.752932 | orchestrator |
2026-01-05 00:44:29.752947 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-05 00:44:29.752964 | orchestrator | Monday 05 January 2026 00:44:19 +0000 (0:00:00.757) 0:00:10.600 ********
2026-01-05 00:44:29.752979 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:44:29.752995 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:44:29.753011 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:44:29.753027 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:44:29.753043 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:44:29.753058 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:44:29.753076 | orchestrator |
2026-01-05 00:44:29.753092 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-01-05 00:44:29.753110 | orchestrator | Monday 05 January 2026 00:44:25 +0000 (0:00:05.709) 0:00:16.310 ********
2026-01-05 00:44:29.753126 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.753155 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.753173 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.753191 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.753209 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.753227 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.753245 | orchestrator |
2026-01-05 00:44:29.753264 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-01-05 00:44:29.753282 | orchestrator | Monday 05 January 2026 00:44:26 +0000 (0:00:01.224) 0:00:17.535 ********
2026-01-05 00:44:29.753299 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.753317 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.753334 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.753351 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.753369 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.753386 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.753403 | orchestrator |
2026-01-05 00:44:29.753421 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-01-05 00:44:29.753500 | orchestrator | Monday 05 January 2026 00:44:28 +0000 (0:00:01.351) 0:00:18.886 ********
2026-01-05 00:44:29.753522 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.753540 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.753558 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.753576 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.753593 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.753610 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.753627 | orchestrator |
2026-01-05 00:44:29.753644 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-05 00:44:29.753661 | orchestrator | Monday 05 January 2026 00:44:28 +0000 (0:00:00.648) 0:00:19.534 ********
2026-01-05 00:44:29.753678 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-05 00:44:29.753705 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-05 00:44:29.753723 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:44:29.753740 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-05 00:44:29.753774 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-05 00:44:29.753792 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:44:29.753811 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-05 00:44:29.753828 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-05 00:44:29.753846 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:44:29.753910 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-05 00:44:29.753947 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-05 00:44:29.753965 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:44:29.753984 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-05 00:44:29.754003 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-05 00:44:29.754107 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:44:29.754122 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-05 00:44:29.754140 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-05 00:44:29.754160 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:44:29.754178 | orchestrator |
2026-01-05 00:44:29.754208 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-05 00:44:29.754300 | orchestrator | Monday 05 January 2026 00:44:29 +0000 (0:00:00.882) 0:00:20.417 ********
2026-01-05 00:45:45.440767 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:45.440884 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:45.440900 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:45:45.440912 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:45:45.440924 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:45:45.440935 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:45:45.440947 | orchestrator |
2026-01-05 00:45:45.440960 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-05 00:45:45.440974 | orchestrator | Monday 05 January 2026 00:44:30 +0000 (0:00:00.645) 0:00:21.063 ********
2026-01-05 00:45:45.440985 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:45:45.440995 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:45:45.441005 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:45:45.441014 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:45:45.441025 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:45:45.441034 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:45:45.441044 | orchestrator |
2026-01-05 00:45:45.441054 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-05 00:45:45.441063 | orchestrator |
2026-01-05 00:45:45.441074 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-05 00:45:45.441085 | orchestrator | Monday 05 January 2026 00:44:31 +0000 (0:00:01.270) 0:00:22.333 ********
2026-01-05 00:45:45.441096 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:45:45.441107 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:45:45.441117 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:45:45.441127 | orchestrator |
2026-01-05 00:45:45.441137 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-05 00:45:45.441148 | orchestrator | Monday 05 January 2026 00:44:33 +0000 (0:00:01.816) 0:00:24.150 ********
2026-01-05 00:45:45.441158 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:45:45.441168 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:45:45.441179 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:45:45.441190 | orchestrator |
2026-01-05 00:45:45.441200 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-05 00:45:45.441211 | orchestrator | Monday 05 January 2026 00:44:35 +0000 (0:00:01.913) 0:00:26.064 ********
2026-01-05 00:45:45.441222 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:45:45.441232 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:45:45.441242 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:45:45.441253 | orchestrator |
2026-01-05 00:45:45.441264 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-05 00:45:45.441274 | orchestrator | Monday 05 January 2026 00:44:36 +0000 (0:00:01.011) 0:00:27.075 ********
2026-01-05 00:45:45.441307 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:45:45.441321 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:45:45.441331 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:45:45.441342 | orchestrator |
2026-01-05 00:45:45.441353 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-05 00:45:45.441363 | orchestrator | Monday 05 January 2026 00:44:37 +0000 (0:00:00.358) 0:00:27.764 ********
2026-01-05 00:45:45.441397 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:45:45.441409 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:45:45.441419 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:45:45.441430 | orchestrator |
2026-01-05 00:45:45.441441 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-05 00:45:45.441474 | orchestrator | Monday 05 January 2026 00:44:37 +0000 (0:00:00.358) 0:00:28.123 ********
2026-01-05 00:45:45.441484 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:45:45.441491 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:45:45.441498 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:45:45.441505 | orchestrator |
2026-01-05 00:45:45.441513 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-05 00:45:45.441520 | orchestrator | Monday 05 January 2026 00:44:38 +0000 (0:00:00.913) 0:00:29.036 ********
2026-01-05 00:45:45.441526 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:45:45.441532 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:45:45.441538 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:45:45.441544 | orchestrator |
2026-01-05 00:45:45.441551 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-05 00:45:45.441557 | orchestrator | Monday 05 January 2026 00:44:39 +0000 (0:00:01.477) 0:00:30.514 ********
2026-01-05 00:45:45.441563 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:45:45.441571 | orchestrator |
2026-01-05 00:45:45.441577 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-05 00:45:45.441584 | orchestrator | Monday 05 January 2026 00:44:40 +0000 (0:00:00.503) 0:00:31.017 ********
2026-01-05 00:45:45.441590 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:45:45.441597 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:45:45.441602 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:45:45.441608 | orchestrator |
2026-01-05 00:45:45.441613 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-05 00:45:45.441618 | orchestrator | Monday 05 January 2026 00:44:42 +0000 (0:00:01.899) 0:00:32.917 ********
2026-01-05 00:45:45.441624 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:45:45.441629 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:45:45.441634 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:45:45.441640 | orchestrator |
2026-01-05 00:45:45.441645 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-05 00:45:45.441650 | orchestrator | Monday 05 January 2026 00:44:42 +0000 (0:00:00.521) 0:00:33.438 ********
2026-01-05 00:45:45.441656 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:45:45.441661 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:45:45.441666 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:45:45.441671 | orchestrator |
2026-01-05 00:45:45.441677 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-05 00:45:45.441682 | orchestrator | Monday 05 January 2026 00:44:43 +0000 (0:00:00.745) 0:00:34.183 ********
2026-01-05 00:45:45.441687 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:45:45.441693 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:45:45.441698 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:45:45.441704 | orchestrator |
2026-01-05 00:45:45.441709 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-05 00:45:45.441732 | orchestrator | Monday
05 January 2026 00:44:44 +0000 (0:00:01.204) 0:00:35.388 ******** 2026-01-05 00:45:45.441738 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:45:45.441750 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:45:45.441755 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:45:45.441761 | orchestrator | 2026-01-05 00:45:45.441766 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-01-05 00:45:45.441771 | orchestrator | Monday 05 January 2026 00:44:45 +0000 (0:00:00.304) 0:00:35.692 ******** 2026-01-05 00:45:45.441777 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:45:45.441782 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:45:45.441787 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:45:45.441793 | orchestrator | 2026-01-05 00:45:45.441798 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-01-05 00:45:45.441804 | orchestrator | Monday 05 January 2026 00:44:45 +0000 (0:00:00.552) 0:00:36.244 ******** 2026-01-05 00:45:45.441809 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:45:45.441814 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:45:45.441819 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:45:45.441825 | orchestrator | 2026-01-05 00:45:45.441834 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-01-05 00:45:45.441840 | orchestrator | Monday 05 January 2026 00:44:46 +0000 (0:00:01.340) 0:00:37.585 ******** 2026-01-05 00:45:45.441845 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:45:45.441851 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:45:45.441856 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:45:45.441861 | orchestrator | 2026-01-05 00:45:45.441867 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-01-05 00:45:45.441872 | orchestrator | Monday 05 January 2026 00:44:49 
+0000 (0:00:02.776) 0:00:40.362 ******** 2026-01-05 00:45:45.441877 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:45:45.441883 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:45:45.441888 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:45:45.441897 | orchestrator | 2026-01-05 00:45:45.441903 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-01-05 00:45:45.441909 | orchestrator | Monday 05 January 2026 00:44:50 +0000 (0:00:00.367) 0:00:40.729 ******** 2026-01-05 00:45:45.441914 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:45:45.441922 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:45:45.441927 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-05 00:45:45.441932 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:45:45.441938 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:45:45.441943 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-05 00:45:45.441949 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:45:45.441954 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-01-05 00:45:45.441959 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-05 00:45:45.441965 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-05 00:45:45.441970 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-05 00:45:45.441979 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-05 00:45:45.441985 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-01-05 00:45:45.441990 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-01-05 00:45:45.441995 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-01-05 00:45:45.442001 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:45:45.442006 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:45:45.442011 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:45:45.442060 | orchestrator | 2026-01-05 00:45:45.442070 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-05 00:45:45.442075 | orchestrator | Monday 05 January 2026 00:45:44 +0000 (0:00:54.054) 0:01:34.784 ******** 2026-01-05 00:45:45.442080 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:45:45.442086 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:45:45.442091 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:45:45.442097 | orchestrator | 2026-01-05 00:45:45.442102 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-05 00:45:45.442107 | orchestrator | Monday 05 January 2026 00:45:44 +0000 (0:00:00.300) 0:01:35.085 ******** 2026-01-05 00:45:45.442118 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:46:28.207088 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:46:28.207220 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:46:28.207244 | orchestrator | 2026-01-05 00:46:28.207266 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-05 00:46:28.207285 | orchestrator | Monday 05 January 2026 00:45:45 +0000 (0:00:01.038) 0:01:36.123 ******** 2026-01-05 00:46:28.207304 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:46:28.207323 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:46:28.207392 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:46:28.207414 | orchestrator | 2026-01-05 00:46:28.207433 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-05 00:46:28.207453 | orchestrator | Monday 05 January 2026 00:45:46 +0000 (0:00:01.289) 0:01:37.413 ******** 2026-01-05 00:46:28.207470 
| orchestrator | changed: [testbed-node-2] 2026-01-05 00:46:28.207489 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:46:28.207508 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:46:28.207526 | orchestrator | 2026-01-05 00:46:28.207544 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-05 00:46:28.207564 | orchestrator | Monday 05 January 2026 00:46:13 +0000 (0:00:27.164) 0:02:04.578 ******** 2026-01-05 00:46:28.207583 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:46:28.207596 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:46:28.207607 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:46:28.207618 | orchestrator | 2026-01-05 00:46:28.207629 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-05 00:46:28.207641 | orchestrator | Monday 05 January 2026 00:46:14 +0000 (0:00:00.675) 0:02:05.254 ******** 2026-01-05 00:46:28.207653 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:46:28.207664 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:46:28.207674 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:46:28.207685 | orchestrator | 2026-01-05 00:46:28.207697 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-05 00:46:28.207708 | orchestrator | Monday 05 January 2026 00:46:15 +0000 (0:00:00.691) 0:02:05.945 ******** 2026-01-05 00:46:28.207718 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:46:28.207729 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:46:28.207740 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:46:28.207751 | orchestrator | 2026-01-05 00:46:28.207762 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-05 00:46:28.207804 | orchestrator | Monday 05 January 2026 00:46:15 +0000 (0:00:00.662) 0:02:06.607 ******** 2026-01-05 00:46:28.207815 | orchestrator | ok: [testbed-node-1] 
2026-01-05 00:46:28.207826 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:46:28.207836 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:46:28.207847 | orchestrator | 2026-01-05 00:46:28.207877 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-05 00:46:28.207889 | orchestrator | Monday 05 January 2026 00:46:16 +0000 (0:00:00.751) 0:02:07.358 ******** 2026-01-05 00:46:28.207899 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:46:28.207910 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:46:28.207920 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:46:28.207931 | orchestrator | 2026-01-05 00:46:28.207942 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-05 00:46:28.207953 | orchestrator | Monday 05 January 2026 00:46:16 +0000 (0:00:00.278) 0:02:07.637 ******** 2026-01-05 00:46:28.207964 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:46:28.207974 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:46:28.207985 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:46:28.207996 | orchestrator | 2026-01-05 00:46:28.208006 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-05 00:46:28.208017 | orchestrator | Monday 05 January 2026 00:46:17 +0000 (0:00:00.606) 0:02:08.244 ******** 2026-01-05 00:46:28.208028 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:46:28.208039 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:46:28.208050 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:46:28.208061 | orchestrator | 2026-01-05 00:46:28.208072 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-05 00:46:28.208083 | orchestrator | Monday 05 January 2026 00:46:18 +0000 (0:00:00.701) 0:02:08.945 ******** 2026-01-05 00:46:28.208093 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:46:28.208104 | 
orchestrator | changed: [testbed-node-1] 2026-01-05 00:46:28.208115 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:46:28.208125 | orchestrator | 2026-01-05 00:46:28.208137 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-05 00:46:28.208148 | orchestrator | Monday 05 January 2026 00:46:19 +0000 (0:00:01.099) 0:02:10.045 ******** 2026-01-05 00:46:28.208161 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:46:28.208172 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:46:28.208182 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:46:28.208195 | orchestrator | 2026-01-05 00:46:28.208214 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-05 00:46:28.208233 | orchestrator | Monday 05 January 2026 00:46:20 +0000 (0:00:01.185) 0:02:11.231 ******** 2026-01-05 00:46:28.208251 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:46:28.208270 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:46:28.208286 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:46:28.208302 | orchestrator | 2026-01-05 00:46:28.208318 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-05 00:46:28.208358 | orchestrator | Monday 05 January 2026 00:46:20 +0000 (0:00:00.278) 0:02:11.509 ******** 2026-01-05 00:46:28.208378 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:46:28.208394 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:46:28.208410 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:46:28.208428 | orchestrator | 2026-01-05 00:46:28.208445 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-05 00:46:28.208462 | orchestrator | Monday 05 January 2026 00:46:21 +0000 (0:00:00.274) 0:02:11.784 ******** 2026-01-05 00:46:28.208479 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:46:28.208498 | orchestrator | 
ok: [testbed-node-1] 2026-01-05 00:46:28.208516 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:46:28.208535 | orchestrator | 2026-01-05 00:46:28.208552 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-05 00:46:28.208570 | orchestrator | Monday 05 January 2026 00:46:21 +0000 (0:00:00.639) 0:02:12.423 ******** 2026-01-05 00:46:28.208606 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:46:28.208624 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:46:28.208671 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:46:28.208690 | orchestrator | 2026-01-05 00:46:28.208710 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-05 00:46:28.208732 | orchestrator | Monday 05 January 2026 00:46:22 +0000 (0:00:00.788) 0:02:13.212 ******** 2026-01-05 00:46:28.208751 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:46:28.208769 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:46:28.208787 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 00:46:28.208805 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:46:28.208823 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:46:28.208841 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 00:46:28.208859 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:46:28.208878 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 
00:46:28.208896 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 00:46:28.208915 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-05 00:46:28.208933 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:46:28.208950 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:46:28.208969 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-05 00:46:28.208986 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:46:28.209005 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:46:28.209024 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 00:46:28.209042 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:46:28.209061 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 00:46:28.209078 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:46:28.209095 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 00:46:28.209115 | orchestrator | 2026-01-05 00:46:28.209135 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-05 00:46:28.209154 | orchestrator | 2026-01-05 00:46:28.209173 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-05 00:46:28.209191 | orchestrator | Monday 05 January 2026 00:46:25 +0000 (0:00:02.991) 
0:02:16.203 ******** 2026-01-05 00:46:28.209210 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:46:28.209229 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:46:28.209247 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:46:28.209266 | orchestrator | 2026-01-05 00:46:28.209308 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-05 00:46:28.209327 | orchestrator | Monday 05 January 2026 00:46:25 +0000 (0:00:00.315) 0:02:16.519 ******** 2026-01-05 00:46:28.209382 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:46:28.209401 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:46:28.209419 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:46:28.209449 | orchestrator | 2026-01-05 00:46:28.209467 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-05 00:46:28.209485 | orchestrator | Monday 05 January 2026 00:46:26 +0000 (0:00:00.785) 0:02:17.305 ******** 2026-01-05 00:46:28.209503 | orchestrator | ok: [testbed-node-3] 2026-01-05 00:46:28.209520 | orchestrator | ok: [testbed-node-4] 2026-01-05 00:46:28.209539 | orchestrator | ok: [testbed-node-5] 2026-01-05 00:46:28.209557 | orchestrator | 2026-01-05 00:46:28.209575 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-05 00:46:28.209594 | orchestrator | Monday 05 January 2026 00:46:26 +0000 (0:00:00.319) 0:02:17.624 ******** 2026-01-05 00:46:28.209612 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 00:46:28.209629 | orchestrator | 2026-01-05 00:46:28.209647 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-05 00:46:28.209665 | orchestrator | Monday 05 January 2026 00:46:27 +0000 (0:00:00.422) 0:02:18.047 ******** 2026-01-05 00:46:28.209683 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:46:28.209701 
| orchestrator | skipping: [testbed-node-4] 2026-01-05 00:46:28.209720 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:46:28.209739 | orchestrator | 2026-01-05 00:46:28.209758 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-05 00:46:28.209776 | orchestrator | Monday 05 January 2026 00:46:27 +0000 (0:00:00.403) 0:02:18.450 ******** 2026-01-05 00:46:28.209793 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:46:28.209812 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:46:28.209831 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:46:28.209848 | orchestrator | 2026-01-05 00:46:28.209868 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-05 00:46:28.209886 | orchestrator | Monday 05 January 2026 00:46:28 +0000 (0:00:00.284) 0:02:18.735 ******** 2026-01-05 00:46:28.209917 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:48:08.068402 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:48:08.068983 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:48:08.069008 | orchestrator | 2026-01-05 00:48:08.069016 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-05 00:48:08.069022 | orchestrator | Monday 05 January 2026 00:46:28 +0000 (0:00:00.285) 0:02:19.020 ******** 2026-01-05 00:48:08.069027 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:08.069031 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:08.069036 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:08.069040 | orchestrator | 2026-01-05 00:48:08.069045 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-05 00:48:08.069050 | orchestrator | Monday 05 January 2026 00:46:28 +0000 (0:00:00.621) 0:02:19.641 ******** 2026-01-05 00:48:08.069055 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:08.069059 | 
orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:08.069064 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:08.069068 | orchestrator | 2026-01-05 00:48:08.069073 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-05 00:48:08.069077 | orchestrator | Monday 05 January 2026 00:46:30 +0000 (0:00:01.311) 0:02:20.952 ******** 2026-01-05 00:48:08.069082 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:08.069086 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:08.069091 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:08.069096 | orchestrator | 2026-01-05 00:48:08.069100 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-05 00:48:08.069105 | orchestrator | Monday 05 January 2026 00:46:31 +0000 (0:00:01.243) 0:02:22.196 ******** 2026-01-05 00:48:08.069109 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:48:08.069114 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:48:08.069118 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:48:08.069123 | orchestrator | 2026-01-05 00:48:08.069128 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-05 00:48:08.069147 | orchestrator | 2026-01-05 00:48:08.069153 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-05 00:48:08.069157 | orchestrator | Monday 05 January 2026 00:46:41 +0000 (0:00:10.123) 0:02:32.319 ******** 2026-01-05 00:48:08.069162 | orchestrator | ok: [testbed-manager] 2026-01-05 00:48:08.069167 | orchestrator | 2026-01-05 00:48:08.069172 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-05 00:48:08.069176 | orchestrator | Monday 05 January 2026 00:46:42 +0000 (0:00:00.856) 0:02:33.176 ******** 2026-01-05 00:48:08.069180 | orchestrator | changed: [testbed-manager] 2026-01-05 
00:48:08.069185 | orchestrator | 2026-01-05 00:48:08.069190 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-05 00:48:08.069194 | orchestrator | Monday 05 January 2026 00:46:43 +0000 (0:00:00.677) 0:02:33.854 ******** 2026-01-05 00:48:08.069199 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-05 00:48:08.069203 | orchestrator | 2026-01-05 00:48:08.069208 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-05 00:48:08.069213 | orchestrator | Monday 05 January 2026 00:46:43 +0000 (0:00:00.610) 0:02:34.464 ******** 2026-01-05 00:48:08.069217 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:08.069222 | orchestrator | 2026-01-05 00:48:08.069226 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-05 00:48:08.069241 | orchestrator | Monday 05 January 2026 00:46:44 +0000 (0:00:00.863) 0:02:35.327 ******** 2026-01-05 00:48:08.069247 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:08.069251 | orchestrator | 2026-01-05 00:48:08.069256 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-05 00:48:08.069261 | orchestrator | Monday 05 January 2026 00:46:45 +0000 (0:00:00.564) 0:02:35.892 ******** 2026-01-05 00:48:08.069265 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:48:08.069270 | orchestrator | 2026-01-05 00:48:08.069274 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-05 00:48:08.069279 | orchestrator | Monday 05 January 2026 00:46:46 +0000 (0:00:01.559) 0:02:37.452 ******** 2026-01-05 00:48:08.069323 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 00:48:08.069332 | orchestrator | 2026-01-05 00:48:08.069347 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-01-05 00:48:08.069355 | orchestrator | Monday 05 January 2026 00:46:47 +0000 (0:00:00.827) 0:02:38.279 ******** 2026-01-05 00:48:08.069363 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:08.069373 | orchestrator | 2026-01-05 00:48:08.069378 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-05 00:48:08.069384 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:00.413) 0:02:38.692 ******** 2026-01-05 00:48:08.069390 | orchestrator | changed: [testbed-manager] 2026-01-05 00:48:08.069395 | orchestrator | 2026-01-05 00:48:08.069402 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-05 00:48:08.069407 | orchestrator | 2026-01-05 00:48:08.069414 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-05 00:48:08.069420 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:00.432) 0:02:39.125 ******** 2026-01-05 00:48:08.069427 | orchestrator | ok: [testbed-manager] 2026-01-05 00:48:08.069433 | orchestrator | 2026-01-05 00:48:08.069439 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-05 00:48:08.069443 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:00.146) 0:02:39.271 ******** 2026-01-05 00:48:08.069447 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 00:48:08.069453 | orchestrator | 2026-01-05 00:48:08.069460 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-01-05 00:48:08.069466 | orchestrator | Monday 05 January 2026 00:46:48 +0000 (0:00:00.392) 0:02:39.664 ******** 2026-01-05 00:48:08.069473 | orchestrator | ok: [testbed-manager] 2026-01-05 00:48:08.069477 | orchestrator | 2026-01-05 00:48:08.069486 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
***************************
2026-01-05 00:48:08.069490 | orchestrator | Monday 05 January 2026 00:46:49 +0000 (0:00:00.759) 0:02:40.424 ********
2026-01-05 00:48:08.069495 | orchestrator | ok: [testbed-manager]
2026-01-05 00:48:08.069501 | orchestrator |
2026-01-05 00:48:08.069526 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-05 00:48:08.069533 | orchestrator | Monday 05 January 2026 00:46:51 +0000 (0:00:01.493) 0:02:41.918 ********
2026-01-05 00:48:08.069540 | orchestrator | changed: [testbed-manager]
2026-01-05 00:48:08.069546 | orchestrator |
2026-01-05 00:48:08.069552 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-05 00:48:08.069558 | orchestrator | Monday 05 January 2026 00:46:52 +0000 (0:00:00.810) 0:02:42.728 ********
2026-01-05 00:48:08.069564 | orchestrator | ok: [testbed-manager]
2026-01-05 00:48:08.069570 | orchestrator |
2026-01-05 00:48:08.069575 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-05 00:48:08.069581 | orchestrator | Monday 05 January 2026 00:46:52 +0000 (0:00:00.461) 0:02:43.190 ********
2026-01-05 00:48:08.069587 | orchestrator | changed: [testbed-manager]
2026-01-05 00:48:08.069593 | orchestrator |
2026-01-05 00:48:08.069598 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-05 00:48:08.069604 | orchestrator | Monday 05 January 2026 00:47:00 +0000 (0:00:08.426) 0:02:51.616 ********
2026-01-05 00:48:08.069611 | orchestrator | changed: [testbed-manager]
2026-01-05 00:48:08.069617 | orchestrator |
2026-01-05 00:48:08.069624 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-05 00:48:08.069630 | orchestrator | Monday 05 January 2026 00:47:14 +0000 (0:00:13.629) 0:03:05.245 ********
2026-01-05 00:48:08.069636 | orchestrator | ok: [testbed-manager]
2026-01-05 00:48:08.069643 | orchestrator |
2026-01-05 00:48:08.069649 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-05 00:48:08.069656 | orchestrator |
2026-01-05 00:48:08.069662 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-05 00:48:08.069668 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:00.744) 0:03:05.989 ********
2026-01-05 00:48:08.069675 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:48:08.069682 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:48:08.069689 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:48:08.069695 | orchestrator |
2026-01-05 00:48:08.069701 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-05 00:48:08.069707 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:00.338) 0:03:06.328 ********
2026-01-05 00:48:08.069713 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:08.069719 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:48:08.069726 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:48:08.069732 | orchestrator |
2026-01-05 00:48:08.069738 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-05 00:48:08.069744 | orchestrator | Monday 05 January 2026 00:47:15 +0000 (0:00:00.322) 0:03:06.651 ********
2026-01-05 00:48:08.069751 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:48:08.069758 | orchestrator |
2026-01-05 00:48:08.069765 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-05 00:48:08.069772 | orchestrator | Monday 05 January 2026 00:47:16 +0000 (0:00:00.552) 0:03:07.204 ********
2026-01-05 00:48:08.069779 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-05 00:48:08.069786 | orchestrator |
2026-01-05 00:48:08.069792 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-05 00:48:08.069799 | orchestrator | Monday 05 January 2026 00:47:17 +0000 (0:00:00.897) 0:03:08.101 ********
2026-01-05 00:48:08.069805 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:48:08.069812 | orchestrator |
2026-01-05 00:48:08.069850 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-05 00:48:08.069865 | orchestrator | Monday 05 January 2026 00:47:18 +0000 (0:00:00.781) 0:03:08.883 ********
2026-01-05 00:48:08.069872 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:08.069878 | orchestrator |
2026-01-05 00:48:08.069885 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-05 00:48:08.069891 | orchestrator | Monday 05 January 2026 00:47:18 +0000 (0:00:00.108) 0:03:08.992 ********
2026-01-05 00:48:08.069898 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:48:08.069905 | orchestrator |
2026-01-05 00:48:08.069912 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-05 00:48:08.069919 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:00.941) 0:03:09.933 ********
2026-01-05 00:48:08.069926 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:08.069932 | orchestrator |
2026-01-05 00:48:08.069938 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-05 00:48:08.069945 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:00.107) 0:03:10.041 ********
2026-01-05 00:48:08.069951 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:08.069958 | orchestrator |
2026-01-05 00:48:08.069965 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-05 00:48:08.069972 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:00.120) 0:03:10.161 ********
2026-01-05 00:48:08.069978 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:08.069985 | orchestrator |
2026-01-05 00:48:08.069991 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-05 00:48:08.070004 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:00.115) 0:03:10.276 ********
2026-01-05 00:48:08.070090 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:08.070101 | orchestrator |
2026-01-05 00:48:08.070107 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-05 00:48:08.070113 | orchestrator | Monday 05 January 2026 00:47:19 +0000 (0:00:00.126) 0:03:10.403 ********
2026-01-05 00:48:08.070119 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-05 00:48:08.070125 | orchestrator |
2026-01-05 00:48:08.070132 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-05 00:48:08.070138 | orchestrator | Monday 05 January 2026 00:47:25 +0000 (0:00:05.374) 0:03:15.777 ********
2026-01-05 00:48:08.070144 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-05 00:48:08.070150 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-01-05 00:48:08.070169 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-05 00:48:33.515939 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-05 00:48:33.516030 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-05 00:48:33.516036 | orchestrator |
2026-01-05 00:48:33.516042 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-05 00:48:33.516046 | orchestrator | Monday 05 January 2026 00:48:08 +0000 (0:00:42.970) 0:03:58.748 ********
2026-01-05 00:48:33.516051 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 00:48:33.516056 | orchestrator |
2026-01-05 00:48:33.516060 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-05 00:48:33.516064 | orchestrator | Monday 05 January 2026 00:48:09 +0000 (0:00:01.402) 0:04:00.150 ********
2026-01-05 00:48:33.516068 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-05 00:48:33.516072 | orchestrator |
2026-01-05 00:48:33.516076 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-05 00:48:33.516081 | orchestrator | Monday 05 January 2026 00:48:11 +0000 (0:00:01.675) 0:04:01.826 ********
2026-01-05 00:48:33.516088 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-05 00:48:33.516094 | orchestrator |
2026-01-05 00:48:33.516100 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-05 00:48:33.516107 | orchestrator | Monday 05 January 2026 00:48:12 +0000 (0:00:01.299) 0:04:03.125 ********
2026-01-05 00:48:33.516133 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:33.516139 | orchestrator |
2026-01-05 00:48:33.516144 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-05 00:48:33.516150 | orchestrator | Monday 05 January 2026 00:48:12 +0000 (0:00:00.128) 0:04:03.254 ********
2026-01-05 00:48:33.516160 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-05 00:48:33.516169 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-05 00:48:33.516174 | orchestrator |
2026-01-05 00:48:33.516181 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-05 00:48:33.516186 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:01.905) 0:04:05.160 ********
2026-01-05 00:48:33.516193 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:33.516249 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:48:33.516256 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:48:33.516262 | orchestrator |
2026-01-05 00:48:33.516268 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-05 00:48:33.516274 | orchestrator | Monday 05 January 2026 00:48:14 +0000 (0:00:00.316) 0:04:05.476 ********
2026-01-05 00:48:33.516280 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:48:33.516287 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:48:33.516293 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:48:33.516300 | orchestrator |
2026-01-05 00:48:33.516306 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-05 00:48:33.516312 | orchestrator |
2026-01-05 00:48:33.516317 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-05 00:48:33.516323 | orchestrator | Monday 05 January 2026 00:48:15 +0000 (0:00:00.881) 0:04:06.358 ********
2026-01-05 00:48:33.516329 | orchestrator | ok: [testbed-manager]
2026-01-05 00:48:33.516336 | orchestrator |
2026-01-05 00:48:33.516343 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-01-05 00:48:33.516349 | orchestrator | Monday 05 January 2026 00:48:16 +0000 (0:00:00.261) 0:04:06.696 ********
2026-01-05 00:48:33.516355 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-05 00:48:33.516359 | orchestrator |
2026-01-05 00:48:33.516363 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-05 00:48:33.516367 | orchestrator | Monday 05 January 2026 00:48:16 +0000 (0:00:00.261) 0:04:06.957 ********
2026-01-05 00:48:33.516371 | orchestrator | changed: [testbed-manager]
2026-01-05 00:48:33.516374 | orchestrator |
2026-01-05 00:48:33.516378 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-05 00:48:33.516382 | orchestrator |
2026-01-05 00:48:33.516387 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-05 00:48:33.516391 | orchestrator | Monday 05 January 2026 00:48:22 +0000 (0:00:06.407) 0:04:13.365 ********
2026-01-05 00:48:33.516395 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:48:33.516399 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:48:33.516405 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:48:33.516413 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:48:33.516422 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:48:33.516428 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:48:33.516433 | orchestrator |
2026-01-05 00:48:33.516439 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-05 00:48:33.516446 | orchestrator | Monday 05 January 2026 00:48:23 +0000 (0:00:00.608) 0:04:13.973 ********
2026-01-05 00:48:33.516453 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-05 00:48:33.516459 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-05 00:48:33.516466 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-05 00:48:33.516473 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-05 00:48:33.516486 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-05 00:48:33.516491 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-05 00:48:33.516496 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-05 00:48:33.516500 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-05 00:48:33.516505 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-05 00:48:33.516525 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-05 00:48:33.516530 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-05 00:48:33.516535 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-05 00:48:33.516540 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-05 00:48:33.516544 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-05 00:48:33.516549 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-05 00:48:33.516579 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-05 00:48:33.516588 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-05 00:48:33.516593 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-05 00:48:33.516599 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-05 00:48:33.516606 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-05 00:48:33.516612 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-05 00:48:33.516618 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-05 00:48:33.516625 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-05 00:48:33.516631 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-05 00:48:33.516638 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-05 00:48:33.516645 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-05 00:48:33.516652 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-05 00:48:33.516658 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-05 00:48:33.516665 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-05 00:48:33.516673 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-05 00:48:33.516677 | orchestrator |
2026-01-05 00:48:33.516682 | orchestrator | TASK [Manage annotations] ******************************************************
2026-01-05 00:48:33.516687 | orchestrator | Monday 05 January 2026 00:48:32 +0000 (0:00:08.915) 0:04:22.889 ********
2026-01-05 00:48:33.516692 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:48:33.516696 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:48:33.516701 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:48:33.516706 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:33.516710 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:48:33.516714 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:48:33.516719 | orchestrator |
2026-01-05 00:48:33.516723 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-05 00:48:33.516728 | orchestrator | Monday 05 January 2026 00:48:32 +0000 (0:00:00.572) 0:04:23.461 ********
2026-01-05 00:48:33.516732 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:48:33.516743 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:48:33.516747 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:48:33.516752 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:48:33.516756 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:48:33.516760 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:48:33.516765 | orchestrator |
2026-01-05 00:48:33.516770 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:48:33.516775 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:48:33.516783 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-05 00:48:33.516789 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-05 00:48:33.516794 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-05 00:48:33.516800 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 00:48:33.516806 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 00:48:33.516812 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 00:48:33.516819 | orchestrator |
2026-01-05 00:48:33.516825 | orchestrator |
2026-01-05 00:48:33.516831 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:48:33.516837 | orchestrator | Monday 05 January 2026 00:48:33 +0000 (0:00:00.724) 0:04:24.186 ********
2026-01-05 00:48:33.516849 | orchestrator | ===============================================================================
2026-01-05 00:48:33.959602 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.05s
2026-01-05 00:48:33.959717 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.97s
2026-01-05 00:48:33.959732 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.16s
2026-01-05 00:48:33.959742 | orchestrator | kubectl : Install required packages ------------------------------------ 13.63s
2026-01-05 00:48:33.959752 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.12s
2026-01-05 00:48:33.959762 | orchestrator | Manage labels ----------------------------------------------------------- 8.92s
2026-01-05 00:48:33.959771 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.43s
2026-01-05 00:48:33.959779 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.41s
2026-01-05 00:48:33.959788 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.71s
2026-01-05 00:48:33.959797 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.37s
2026-01-05 00:48:33.959806 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.99s
2026-01-05 00:48:33.959817 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.78s
2026-01-05 00:48:33.959827 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.98s
2026-01-05 00:48:33.959836 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.91s
2026-01-05 00:48:33.959846 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.91s
2026-01-05 00:48:33.959855 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.90s
2026-01-05 00:48:33.959865 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.82s
2026-01-05 00:48:33.959907 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.68s
2026-01-05 00:48:33.959917 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.61s
2026-01-05 00:48:33.959927 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.56s
2026-01-05 00:48:34.328967 | orchestrator | + osism apply copy-kubeconfig
2026-01-05 00:48:46.514377 | orchestrator | 2026-01-05 00:48:46 | INFO  | Task 20fa45f6-f851-4ee7-8fe8-1c98785a0411 (copy-kubeconfig) was prepared for execution.
2026-01-05 00:48:46.515420 | orchestrator | 2026-01-05 00:48:46 | INFO  | It takes a moment until task 20fa45f6-f851-4ee7-8fe8-1c98785a0411 (copy-kubeconfig) has been started and output is visible here.
2026-01-05 00:48:53.205319 | orchestrator |
2026-01-05 00:48:53.205472 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-01-05 00:48:53.205491 | orchestrator |
2026-01-05 00:48:53.205504 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-05 00:48:53.205517 | orchestrator | Monday 05 January 2026 00:48:50 +0000 (0:00:00.148) 0:00:00.148 ********
2026-01-05 00:48:53.205529 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-05 00:48:53.205540 | orchestrator |
2026-01-05 00:48:53.205551 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-05 00:48:53.205587 | orchestrator | Monday 05 January 2026 00:48:51 +0000 (0:00:00.736) 0:00:00.884 ********
2026-01-05 00:48:53.205599 | orchestrator | changed: [testbed-manager]
2026-01-05 00:48:53.205612 | orchestrator |
2026-01-05 00:48:53.205624 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-01-05 00:48:53.205636 | orchestrator | Monday 05 January 2026 00:48:52 +0000 (0:00:01.202) 0:00:02.087 ********
2026-01-05 00:48:53.205652 | orchestrator | changed: [testbed-manager]
2026-01-05 00:48:53.205663 | orchestrator |
2026-01-05 00:48:53.205680 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:48:53.205692 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:48:53.205704 | orchestrator |
2026-01-05 00:48:53.205715 | orchestrator |
2026-01-05 00:48:53.205726 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:48:53.205737 | orchestrator | Monday 05 January 2026 00:48:52 +0000 (0:00:00.468) 0:00:02.556 ********
2026-01-05 00:48:53.205748 | orchestrator | ===============================================================================
2026-01-05 00:48:53.205759 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.20s
2026-01-05 00:48:53.205770 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s
2026-01-05 00:48:53.205781 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s
2026-01-05 00:48:53.441132 | orchestrator | + osism apply k8s-dashboard
2026-01-05 00:48:55.253616 | orchestrator | 2026-01-05 00:48:55 | INFO  | Task 8773c171-77ee-45af-afe1-fa6f515fae2c (k8s-dashboard) was prepared for execution.
2026-01-05 00:48:55.253724 | orchestrator | 2026-01-05 00:48:55 | INFO  | It takes a moment until task 8773c171-77ee-45af-afe1-fa6f515fae2c (k8s-dashboard) has been started and output is visible here.
2026-01-05 00:49:33.275297 | orchestrator |
2026-01-05 00:49:33.275421 | orchestrator | PLAY [Apply kubernetes-dashboard] **********************************************
2026-01-05 00:49:33.275433 | orchestrator |
2026-01-05 00:49:33.275440 | orchestrator | TASK [Deploy kubernetes-dashboard helm chart] **********************************
2026-01-05 00:49:33.275447 | orchestrator | Monday 05 January 2026 00:48:59 +0000 (0:00:00.311) 0:00:00.311 ********
2026-01-05 00:49:33.275454 | orchestrator | changed: [testbed-manager]
2026-01-05 00:49:33.275461 | orchestrator |
2026-01-05 00:49:33.275466 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:49:33.275473 | orchestrator | testbed-manager : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:49:33.275513 | orchestrator |
2026-01-05 00:49:33.275521 | orchestrator |
2026-01-05 00:49:33.275528 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:49:33.275534 | orchestrator | Monday 05 January 2026 00:49:33 +0000 (0:00:33.763) 0:00:34.075 ********
2026-01-05 00:49:33.275540 | orchestrator | ===============================================================================
2026-01-05 00:49:33.275547 | orchestrator | Deploy kubernetes-dashboard helm chart --------------------------------- 33.76s
2026-01-05 00:49:33.515680 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-01-05 00:49:45.516333 | orchestrator | 2026-01-05 00:49:45 | INFO  | Task 0232e24e-3fa0-492b-9744-b2454d983ab8 (openstackclient) was prepared for execution.
2026-01-05 00:49:45.516426 | orchestrator | 2026-01-05 00:49:45 | INFO  | It takes a moment until task 0232e24e-3fa0-492b-9744-b2454d983ab8 (openstackclient) has been started and output is visible here.
2026-01-05 00:50:31.108034 | orchestrator |
2026-01-05 00:50:31.108230 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-05 00:50:31.108246 | orchestrator |
2026-01-05 00:50:31.108257 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-05 00:50:31.108266 | orchestrator | Monday 05 January 2026 00:49:49 +0000 (0:00:00.229) 0:00:00.229 ********
2026-01-05 00:50:31.108278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-05 00:50:31.108288 | orchestrator |
2026-01-05 00:50:31.108297 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-05 00:50:31.108306 | orchestrator | Monday 05 January 2026 00:49:50 +0000 (0:00:00.215) 0:00:00.444 ********
2026-01-05 00:50:31.108315 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-05 00:50:31.108325 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-05 00:50:31.108335 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-05 00:50:31.108343 | orchestrator |
2026-01-05 00:50:31.108352 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-05 00:50:31.108361 | orchestrator | Monday 05 January 2026 00:49:51 +0000 (0:00:01.242) 0:00:01.687 ********
2026-01-05 00:50:31.108371 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:31.108380 | orchestrator |
2026-01-05 00:50:31.108389 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-05 00:50:31.108398 | orchestrator | Monday 05 January 2026 00:49:52 +0000 (0:00:01.318) 0:00:03.005 ********
2026-01-05 00:50:31.108407 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-05 00:50:31.108418 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:31.108428 | orchestrator |
2026-01-05 00:50:31.108437 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-05 00:50:31.108446 | orchestrator | Monday 05 January 2026 00:50:26 +0000 (0:00:33.722) 0:00:36.727 ********
2026-01-05 00:50:31.108454 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:31.108463 | orchestrator |
2026-01-05 00:50:31.108472 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-05 00:50:31.108481 | orchestrator | Monday 05 January 2026 00:50:27 +0000 (0:00:00.853) 0:00:37.581 ********
2026-01-05 00:50:31.108490 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:31.108499 | orchestrator |
2026-01-05 00:50:31.108508 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-05 00:50:31.108517 | orchestrator | Monday 05 January 2026 00:50:27 +0000 (0:00:00.620) 0:00:38.201 ********
2026-01-05 00:50:31.108528 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:31.108539 | orchestrator |
2026-01-05 00:50:31.108550 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-05 00:50:31.108585 | orchestrator | Monday 05 January 2026 00:50:29 +0000 (0:00:01.401) 0:00:39.603 ********
2026-01-05 00:50:31.108596 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:31.108607 | orchestrator |
2026-01-05 00:50:31.108617 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-05 00:50:31.108626 | orchestrator | Monday 05 January 2026 00:50:29 +0000 (0:00:00.634) 0:00:40.237 ********
2026-01-05 00:50:31.108634 | orchestrator | changed: [testbed-manager]
2026-01-05 00:50:31.108643 | orchestrator |
2026-01-05 00:50:31.108652 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-05 00:50:31.108661 | orchestrator | Monday 05 January 2026 00:50:30 +0000 (0:00:00.545) 0:00:40.783 ********
2026-01-05 00:50:31.108669 | orchestrator | ok: [testbed-manager]
2026-01-05 00:50:31.108678 | orchestrator |
2026-01-05 00:50:31.108687 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:50:31.108696 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 00:50:31.108706 | orchestrator |
2026-01-05 00:50:31.108715 | orchestrator |
2026-01-05 00:50:31.108730 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:50:31.108744 | orchestrator | Monday 05 January 2026 00:50:30 +0000 (0:00:00.398) 0:00:41.182 ********
2026-01-05 00:50:31.108760 | orchestrator | ===============================================================================
2026-01-05 00:50:31.108774 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.72s
2026-01-05 00:50:31.108790 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.40s
2026-01-05 00:50:31.108803 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.32s
2026-01-05 00:50:31.108818 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.24s
2026-01-05 00:50:31.108834 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.85s
2026-01-05 00:50:31.108887 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.63s
2026-01-05 00:50:31.108896 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.62s
2026-01-05 00:50:31.108905 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.55s
2026-01-05 00:50:31.108913 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s
2026-01-05 00:50:31.108922 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.22s
2026-01-05 00:50:33.287937 | orchestrator | 2026-01-05 00:50:33 | INFO  | Task 25d69ba6-e091-4cb7-ab9f-99bd990792cd (common) was prepared for execution.
2026-01-05 00:50:33.288042 | orchestrator | 2026-01-05 00:50:33 | INFO  | It takes a moment until task 25d69ba6-e091-4cb7-ab9f-99bd990792cd (common) has been started and output is visible here.
2026-01-05 00:50:46.554047 | orchestrator |
2026-01-05 00:50:46.554163 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-05 00:50:46.554169 | orchestrator |
2026-01-05 00:50:46.554182 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-05 00:50:46.554186 | orchestrator | Monday 05 January 2026 00:50:37 +0000 (0:00:00.304) 0:00:00.304 ********
2026-01-05 00:50:46.554190 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:50:46.554194 | orchestrator |
2026-01-05 00:50:46.554197 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-05 00:50:46.554201 | orchestrator | Monday 05 January 2026 00:50:39 +0000 (0:00:01.494) 0:00:01.799 ********
2026-01-05 00:50:46.554204 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:50:46.554208 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:50:46.554211 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:50:46.554223 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:50:46.554227 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:50:46.554230 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:50:46.554233 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:50:46.554236 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 00:50:46.554239 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:50:46.554242 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:50:46.554246 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:50:46.554249 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:50:46.554252 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:50:46.554258 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:50:46.554261 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:50:46.554264 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 00:50:46.554268 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:50:46.554271 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:50:46.554274 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:50:46.554277 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:50:46.554280 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 00:50:46.554283 | orchestrator |
2026-01-05 00:50:46.554286 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-05 00:50:46.554290 | orchestrator | Monday 05 January 2026 00:50:42 +0000 (0:00:03.091) 0:00:04.891 ********
2026-01-05 00:50:46.554293 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 00:50:46.554297 | orchestrator |
2026-01-05 00:50:46.554300 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-01-05 00:50:46.554304 | orchestrator | Monday 05 January 2026 00:50:43 +0000 (0:00:01.563) 0:00:06.455 ********
2026-01-05 00:50:46.554308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:46.554313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:46.554329 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:46.554333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:46.554337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:46.554340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:46.554343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:46.554346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:46.554411 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:46.554431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.646758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647586 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647623 | orchestrator |
2026-01-05 00:50:47.647629 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-01-05 00:50:47.647633 | orchestrator | Monday 05 January 2026 00:50:47 +0000 (0:00:03.499) 0:00:09.954 ********
2026-01-05 00:50:47.647640 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:47.647645 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647649 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:47.647657 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:50:47.647662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:47.647690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241206 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:48.241213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:48.241219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241238 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:48.241243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:48.241258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241267 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:48.241279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:48.241284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241294 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:48.241298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:48.241303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:48.241315 | orchestrator | skipping: [testbed-node-4]
2026-01-05 00:50:48.241320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:48.241327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215596 | orchestrator | skipping: [testbed-node-5]
2026-01-05 00:50:49.215611 | orchestrator |
2026-01-05 00:50:49.215623 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-05 00:50:49.215635 | orchestrator | Monday 05 January 2026 00:50:48 +0000 (0:00:00.897) 0:00:10.851 ********
2026-01-05 00:50:49.215646 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:49.215683 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215718 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:49.215740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215760 | orchestrator | skipping: [testbed-manager]
2026-01-05 00:50:49.215793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:49.215805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215825 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:50:49.215842 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:50:49.215867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:49.215877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:49.215897 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:50:49.215907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:49.215936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:54.401002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:54.401095 | orchestrator | skipping: [testbed-node-3]
2026-01-05 00:50:54.401104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:50:54.401127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:54.401133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:50:54.401138 |
orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:54.401142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 00:50:54.401146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:50:54.401150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:50:54.401154 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:54.401158 | orchestrator | 2026-01-05 00:50:54.401163 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-05 
00:50:54.401168 | orchestrator | Monday 05 January 2026 00:50:50 +0000 (0:00:01.913) 0:00:12.765 ******** 2026-01-05 00:50:54.401172 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:50:54.401176 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:54.401180 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:54.401184 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:54.401198 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:54.401202 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:54.401205 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:54.401209 | orchestrator | 2026-01-05 00:50:54.401213 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-05 00:50:54.401217 | orchestrator | Monday 05 January 2026 00:50:50 +0000 (0:00:00.731) 0:00:13.496 ******** 2026-01-05 00:50:54.401221 | orchestrator | skipping: [testbed-manager] 2026-01-05 00:50:54.401225 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:50:54.401245 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:50:54.401249 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:50:54.401253 | orchestrator | skipping: [testbed-node-3] 2026-01-05 00:50:54.401257 | orchestrator | skipping: [testbed-node-4] 2026-01-05 00:50:54.401260 | orchestrator | skipping: [testbed-node-5] 2026-01-05 00:50:54.401264 | orchestrator | 2026-01-05 00:50:54.401268 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-05 00:50:54.401272 | orchestrator | Monday 05 January 2026 00:50:51 +0000 (0:00:00.885) 0:00:14.382 ******** 2026-01-05 00:50:54.401276 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:50:54.401283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:50:54.401287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:50:54.401291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:50:54.401295 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:50:54.401299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:50:54.401307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:50:57.541751 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541927 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 00:50:57.541989 | orchestrator | 2026-01-05 00:50:57.541996 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-05 00:50:57.542001 | orchestrator | Monday 05 January 2026 00:50:55 +0000 (0:00:03.655) 0:00:18.037 ******** 2026-01-05 00:50:57.542005 | orchestrator | [WARNING]: Skipped 
2026-01-05 00:50:57.542010 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-05 00:50:57.542098 | orchestrator | to this access issue: 2026-01-05 00:50:57.542105 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-05 00:50:57.542111 | orchestrator | directory 2026-01-05 00:50:57.542118 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:50:57.542125 | orchestrator | 2026-01-05 00:50:57.542131 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-05 00:50:57.542137 | orchestrator | Monday 05 January 2026 00:50:56 +0000 (0:00:01.063) 0:00:19.100 ******** 2026-01-05 00:50:57.542143 | orchestrator | [WARNING]: Skipped 2026-01-05 00:50:57.542157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-05 00:51:08.118846 | orchestrator | to this access issue: 2026-01-05 00:51:08.118963 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-05 00:51:08.118990 | orchestrator | directory 2026-01-05 00:51:08.119000 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:51:08.119007 | orchestrator | 2026-01-05 00:51:08.119014 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-05 00:51:08.119072 | orchestrator | Monday 05 January 2026 00:50:57 +0000 (0:00:01.393) 0:00:20.493 ******** 2026-01-05 00:51:08.119080 | orchestrator | [WARNING]: Skipped 2026-01-05 00:51:08.119086 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-05 00:51:08.119092 | orchestrator | to this access issue: 2026-01-05 00:51:08.119099 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-05 00:51:08.119105 | orchestrator | directory 2026-01-05 00:51:08.119111 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-01-05 00:51:08.119117 | orchestrator | 2026-01-05 00:51:08.119123 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-05 00:51:08.119129 | orchestrator | Monday 05 January 2026 00:50:58 +0000 (0:00:00.878) 0:00:21.372 ******** 2026-01-05 00:51:08.119135 | orchestrator | [WARNING]: Skipped 2026-01-05 00:51:08.119141 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-05 00:51:08.119147 | orchestrator | to this access issue: 2026-01-05 00:51:08.119154 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-05 00:51:08.119160 | orchestrator | directory 2026-01-05 00:51:08.119167 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 00:51:08.119174 | orchestrator | 2026-01-05 00:51:08.119178 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-05 00:51:08.119187 | orchestrator | Monday 05 January 2026 00:50:59 +0000 (0:00:00.882) 0:00:22.254 ******** 2026-01-05 00:51:08.119192 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:08.119196 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:08.119200 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:08.119204 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:08.119208 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:08.119212 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:08.119216 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:08.119219 | orchestrator | 2026-01-05 00:51:08.119223 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-05 00:51:08.119227 | orchestrator | Monday 05 January 2026 00:51:02 +0000 (0:00:02.814) 0:00:25.069 ******** 2026-01-05 00:51:08.119231 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:08.119253 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:08.119258 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:08.119262 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:08.119265 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:08.119269 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:08.119273 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 00:51:08.119277 | orchestrator | 2026-01-05 00:51:08.119281 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-05 00:51:08.119284 | orchestrator | Monday 05 January 2026 00:51:04 +0000 (0:00:02.278) 0:00:27.348 ******** 2026-01-05 00:51:08.119288 | orchestrator | changed: [testbed-manager] 2026-01-05 00:51:08.119292 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:51:08.119296 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:51:08.119299 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:51:08.119303 | orchestrator | changed: [testbed-node-3] 2026-01-05 00:51:08.119307 | orchestrator | changed: [testbed-node-4] 2026-01-05 00:51:08.119311 | orchestrator | changed: [testbed-node-5] 2026-01-05 00:51:08.119314 | orchestrator | 2026-01-05 00:51:08.119318 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-05 00:51:08.119322 | orchestrator | Monday 05 January 2026 00:51:06 +0000 (0:00:02.055) 0:00:29.403 ******** 2026-01-05 
00:51:08.119328 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:08.119358 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:08.119363 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:08.119376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:08.119388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 00:51:08.119396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:51:08.119402 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:08.119416 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:08.119424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:08.119437 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:14.571663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:14.571779 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:14.571787 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:14.571792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:14.571796 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:14.571800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:14.571804 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:14.571826 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:14.571830 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:14.571839 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:14.571856 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:14.571861 | orchestrator |
2026-01-05 00:51:14.571865 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-01-05 00:51:14.571870 | orchestrator | Monday 05 January 2026 00:51:08 +0000 (0:00:01.720) 0:00:31.124 ********
2026-01-05 00:51:14.571874 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-05 00:51:14.571879 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-05 00:51:14.571883 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-05 00:51:14.571887 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-05 00:51:14.571891 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-05 00:51:14.571894 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-05 00:51:14.571898 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-05 00:51:14.571902 | orchestrator |
2026-01-05 00:51:14.571906 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-01-05 00:51:14.571910 | orchestrator | Monday 05 January 2026 00:51:10 +0000 (0:00:02.153) 0:00:33.277 ********
2026-01-05 00:51:14.571914 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-05 00:51:14.571918 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-05 00:51:14.571922 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-05 00:51:14.571926 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-05 00:51:14.571929 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-05 00:51:14.571933 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-05 00:51:14.571937 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-05 00:51:14.571941 | orchestrator |
2026-01-05 00:51:14.571944 | orchestrator | TASK [common : Check common containers] ****************************************
2026-01-05 00:51:14.571948 | orchestrator | Monday 05 January 2026 00:51:12 +0000 (0:00:01.792) 0:00:35.069 ********
2026-01-05 00:51:14.571952 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:14.571965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:15.327486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:15.327590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:15.327600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:15.327606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:15.327612 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:15.327617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 00:51:15.327637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:15.327657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:15.327666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:15.327672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:15.327677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:15.327683 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:15.327691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:15.327701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:51:15.327712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:52:33.079427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:52:33.079542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:52:33.079553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:52:33.079560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:52:33.079566 | orchestrator |
2026-01-05 00:52:33.079575 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-01-05 00:52:33.079583 | orchestrator | Monday 05 January 2026 00:51:15 +0000 (0:00:02.869) 0:00:37.939 ********
2026-01-05 00:52:33.079589 | orchestrator | changed: [testbed-manager]
2026-01-05 00:52:33.079597 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:33.079604 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:33.079611 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:33.079618 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:52:33.079624 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:52:33.079630 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:52:33.079636 | orchestrator |
2026-01-05 00:52:33.079643 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-01-05 00:52:33.079675 | orchestrator | Monday 05 January 2026 00:51:16 +0000 (0:00:01.631) 0:00:39.570 ********
2026-01-05 00:52:33.079682 | orchestrator | changed: [testbed-manager]
2026-01-05 00:52:33.079687 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:33.079692 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:52:33.079700 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:33.079708 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:33.079714 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:52:33.079720 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:52:33.079726 | orchestrator |
2026-01-05 00:52:33.079733 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-05 00:52:33.079739 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:01.128) 0:00:40.699 ********
2026-01-05 00:52:33.079745 | orchestrator |
2026-01-05 00:52:33.079751 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-05 00:52:33.079759 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.089) 0:00:40.788 ********
2026-01-05 00:52:33.079768 | orchestrator |
2026-01-05 00:52:33.079776 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-05 00:52:33.079784 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.074) 0:00:40.863 ********
2026-01-05 00:52:33.079792 | orchestrator |
2026-01-05 00:52:33.079798 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-05 00:52:33.079804 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.067) 0:00:40.930 ********
2026-01-05 00:52:33.079810 | orchestrator |
2026-01-05 00:52:33.079816 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-05 00:52:33.079823 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.242) 0:00:41.173 ********
2026-01-05 00:52:33.079828 | orchestrator |
2026-01-05 00:52:33.079834 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-05 00:52:33.079841 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.065) 0:00:41.238 ********
2026-01-05 00:52:33.079847 | orchestrator |
2026-01-05 00:52:33.079853 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-05 00:52:33.079879 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.074) 0:00:41.313 ********
2026-01-05 00:52:33.079885 | orchestrator |
2026-01-05 00:52:33.079891 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-05 00:52:33.079897 | orchestrator | Monday 05 January 2026 00:51:18 +0000 (0:00:00.108) 0:00:41.421 ********
2026-01-05 00:52:33.079902 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:33.079908 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:33.079915 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:52:33.079922 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:52:33.079929 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:33.079956 | orchestrator | changed: [testbed-manager]
2026-01-05 00:52:33.079964 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:52:33.079971 | orchestrator |
2026-01-05 00:52:33.079978 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-05 00:52:33.079985 | orchestrator | Monday 05 January 2026 00:51:54 +0000 (0:00:35.329) 0:01:16.751 ********
2026-01-05 00:52:33.079991 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:33.079998 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:52:33.080023 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:52:33.080034 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:52:33.080040 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:33.080046 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:33.080053 | orchestrator | changed: [testbed-manager]
2026-01-05 00:52:33.080060 | orchestrator |
2026-01-05 00:52:33.080066 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-05 00:52:33.080072 | orchestrator | Monday 05 January 2026 00:52:22 +0000 (0:00:28.108) 0:01:44.859 ********
2026-01-05 00:52:33.080078 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:33.080116 | orchestrator | ok: [testbed-manager]
2026-01-05 00:52:33.080122 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:52:33.080128 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:52:33.080133 | orchestrator | ok: [testbed-node-3]
2026-01-05 00:52:33.080139 | orchestrator | ok: [testbed-node-4]
2026-01-05 00:52:33.080145 | orchestrator | ok: [testbed-node-5]
2026-01-05 00:52:33.080150 | orchestrator |
2026-01-05 00:52:33.080157 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-05 00:52:33.080164 | orchestrator | Monday 05 January 2026 00:52:24 +0000 (0:00:02.338) 0:01:47.198 ********
2026-01-05 00:52:33.080169 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:52:33.080176 | orchestrator | changed: [testbed-manager]
2026-01-05 00:52:33.080181 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:52:33.080187 | orchestrator | changed: [testbed-node-3]
2026-01-05 00:52:33.080192 | orchestrator | changed: [testbed-node-4]
2026-01-05 00:52:33.080197 | orchestrator | changed: [testbed-node-5]
2026-01-05 00:52:33.080203 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:52:33.080208 | orchestrator |
2026-01-05 00:52:33.080214 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:52:33.080221 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:52:33.080230 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:52:33.080235 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:52:33.080241 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:52:33.080246 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:52:33.080251 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:52:33.080257 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-05 00:52:33.080263 | orchestrator |
2026-01-05 00:52:33.080268 | orchestrator |
2026-01-05 00:52:33.080274 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:52:33.080279 | orchestrator | Monday 05 January 2026 00:52:33 +0000 (0:00:08.476) 0:01:55.674 ********
2026-01-05 00:52:33.080285 | orchestrator | ===============================================================================
2026-01-05 00:52:33.080291 | orchestrator | common : Restart fluentd container ------------------------------------- 35.33s
2026-01-05 00:52:33.080297 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.11s
2026-01-05 00:52:33.080303 | orchestrator | common : Restart cron container ----------------------------------------- 8.48s
2026-01-05 00:52:33.080314 | orchestrator | common : Copying over config.json files for services -------------------- 3.66s
2026-01-05 00:52:33.080321 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.50s
2026-01-05 00:52:33.080326 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.09s
2026-01-05 00:52:33.080332 | orchestrator | common : Check common containers ---------------------------------------- 2.87s
2026-01-05 00:52:33.080338 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.81s
2026-01-05 00:52:33.080344 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.34s
2026-01-05 00:52:33.080349 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.28s
2026-01-05 00:52:33.080356 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.15s
2026-01-05 00:52:33.080370 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.06s
2026-01-05 00:52:33.080376 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.91s
2026-01-05 00:52:33.080383 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.79s
2026-01-05 00:52:33.080389 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.72s
2026-01-05 00:52:33.080395 | orchestrator | common : Creating log volume -------------------------------------------- 1.63s
2026-01-05 00:52:33.080410 | orchestrator | common : include_tasks -------------------------------------------------- 1.56s
2026-01-05 00:52:33.370738 | orchestrator | common : include_tasks -------------------------------------------------- 1.49s
2026-01-05 00:52:33.370843 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.39s
2026-01-05 00:52:33.370853 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.13s
2026-01-05 00:52:35.496559 | orchestrator | 2026-01-05 00:52:35 | INFO  | Task d2755a64-26a2-4f68-85b8-8e1a6fc0c90f (loadbalancer) was prepared for execution.
2026-01-05 00:52:35.496654 | orchestrator | 2026-01-05 00:52:35 | INFO  | It takes a moment until task d2755a64-26a2-4f68-85b8-8e1a6fc0c90f (loadbalancer) has been started and output is visible here.
2026-01-05 00:52:50.946244 | orchestrator |
2026-01-05 00:52:50.946335 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:52:50.946352 | orchestrator |
2026-01-05 00:52:50.946364 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:52:50.946376 | orchestrator | Monday 05 January 2026 00:52:39 +0000 (0:00:00.269) 0:00:00.269 ********
2026-01-05 00:52:50.946386 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:50.946398 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:52:50.946409 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:52:50.946420 | orchestrator |
2026-01-05 00:52:50.946431 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:52:50.946442 | orchestrator | Monday 05 January 2026 00:52:39 +0000 (0:00:00.334) 0:00:00.604 ********
2026-01-05 00:52:50.946453 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-05 00:52:50.946464 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-05 00:52:50.946475 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-05 00:52:50.946486 | orchestrator |
2026-01-05 00:52:50.946497 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-05 00:52:50.946508 | orchestrator |
2026-01-05 00:52:50.946519 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-05 00:52:50.946529 | orchestrator | Monday 05 January 2026 00:52:40 +0000 (0:00:00.445) 0:00:01.050 ********
2026-01-05 00:52:50.946541 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:52:50.946552 | orchestrator |
2026-01-05 00:52:50.946562 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-05 00:52:50.946574 | orchestrator | Monday 05 January 2026 00:52:40 +0000 (0:00:00.602) 0:00:01.652 ********
2026-01-05 00:52:50.946585 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:50.946595 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:52:50.946606 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:52:50.946617 | orchestrator |
2026-01-05 00:52:50.946628 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-05 00:52:50.946639 | orchestrator | Monday 05 January 2026 00:52:41 +0000 (0:00:00.666) 0:00:02.318 ********
2026-01-05 00:52:50.946650 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:52:50.946661 | orchestrator |
2026-01-05 00:52:50.946672 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-05 00:52:50.946683 | orchestrator | Monday 05 January 2026 00:52:42 +0000 (0:00:00.716) 0:00:03.035 ********
2026-01-05 00:52:50.946712 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:52:50.946724 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:52:50.946734 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:52:50.946745 | orchestrator |
2026-01-05 00:52:50.946756 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-05 00:52:50.946767 | orchestrator | Monday 05 January 2026 00:52:43 +0000 (0:00:00.772) 0:00:03.808 ********
2026-01-05 00:52:50.946778 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:52:50.946789 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:52:50.946800 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:52:50.946811 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:52:50.946839 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:52:50.946851 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-05 00:52:50.946865 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-05 00:52:50.946876 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-05 00:52:50.946886 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-05 00:52:50.946897 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-05 00:52:50.946909 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-05 00:52:50.946919 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-05 00:52:50.946930 | orchestrator |
2026-01-05 00:52:50.946941 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-05 00:52:50.946953 | orchestrator | Monday 05 January 2026 00:52:46 +0000 (0:00:03.310) 0:00:07.118 ********
2026-01-05 00:52:50.946964 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-05 00:52:50.946975 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-05 00:52:50.946986 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-05 00:52:50.946997 | orchestrator |
2026-01-05 00:52:50.947008 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-05 00:52:50.947019 | orchestrator | Monday 05 January 2026 00:52:47 +0000 (0:00:00.750) 0:00:07.868 ********
2026-01-05 00:52:50.947030 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-05 00:52:50.947041 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-05 00:52:50.947052 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-05 00:52:50.947063 | orchestrator |
2026-01-05 00:52:50.947085 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-05 00:52:50.947096 | orchestrator | Monday 05 January 2026 00:52:48 +0000 (0:00:01.461) 0:00:09.330 ********
2026-01-05 00:52:50.947107 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-05 00:52:50.947118 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:52:50.947144 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-05 00:52:50.947155 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:52:50.947166 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-05 00:52:50.947176 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:52:50.947188 | orchestrator |
2026-01-05 00:52:50.947198 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-05 00:52:50.947209 | orchestrator | Monday 05 January 2026 00:52:49 +0000 (0:00:00.521) 0:00:09.851 ********
2026-01-05 00:52:50.947222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-05 00:52:50.947248 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:52:50.947260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:52:50.947272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 
00:52:50.947284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:52:50.947306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:52:55.878350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:52:55.878487 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:52:55.878515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:52:55.878530 | orchestrator | 2026-01-05 00:52:55.878545 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-05 00:52:55.878566 | orchestrator | Monday 05 January 2026 00:52:50 +0000 (0:00:01.741) 0:00:11.593 ******** 2026-01-05 00:52:55.878586 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:52:55.878606 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:52:55.878625 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:52:55.878644 | orchestrator | 2026-01-05 00:52:55.878663 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-05 00:52:55.878675 | orchestrator | Monday 05 January 2026 00:52:51 +0000 (0:00:00.811) 0:00:12.405 ******** 2026-01-05 00:52:55.878687 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-05 00:52:55.878698 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-05 
00:52:55.878708 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-05 00:52:55.878719 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-05 00:52:55.878730 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-05 00:52:55.878740 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-05 00:52:55.878751 | orchestrator | 2026-01-05 00:52:55.878762 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-05 00:52:55.878773 | orchestrator | Monday 05 January 2026 00:52:53 +0000 (0:00:01.424) 0:00:13.829 ******** 2026-01-05 00:52:55.878783 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:52:55.878797 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:52:55.878852 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:52:55.878875 | orchestrator | 2026-01-05 00:52:55.878894 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-05 00:52:55.878913 | orchestrator | Monday 05 January 2026 00:52:54 +0000 (0:00:00.929) 0:00:14.758 ******** 2026-01-05 00:52:55.878932 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:52:55.878951 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:52:55.878968 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:52:55.878986 | orchestrator | 2026-01-05 00:52:55.879004 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-05 00:52:55.879023 | orchestrator | Monday 05 January 2026 00:52:55 +0000 (0:00:01.230) 0:00:15.989 ******** 2026-01-05 00:52:55.879043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:52:55.879105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:52:55.879129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:52:55.879200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4', '__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:52:55.879226 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:52:55.879247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:52:55.879267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:52:55.879288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:52:55.879322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4', '__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:52:55.879337 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:52:55.879360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:52:58.711931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:52:58.712048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:52:58.712061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4', '__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:52:58.712070 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:52:58.712080 | orchestrator | 2026-01-05 00:52:58.712090 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-01-05 00:52:58.712100 | orchestrator | Monday 05 January 2026 00:52:55 +0000 (0:00:00.539) 0:00:16.528 ******** 2026-01-05 00:52:58.712109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:52:58.712153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:52:58.712163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:52:58.712193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:52:58.712201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:52:58.712210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:52:58.712219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4', '__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:52:58.712234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:52:58.712247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4', 
'__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:52:58.712270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:07.832990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:07.833101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4', 
'__omit_place_holder__d7fc57a34c85adddc58ae34b3f1d24f686f094b4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 00:53:07.833113 | orchestrator | 2026-01-05 00:53:07.833121 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-05 00:53:07.833130 | orchestrator | Monday 05 January 2026 00:52:58 +0000 (0:00:02.832) 0:00:19.361 ******** 2026-01-05 00:53:07.833138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:53:07.833170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:53:07.833191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:53:07.833198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:07.833221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:07.833227 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:07.833234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:53:07.833247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:53:07.833254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:53:07.833261 | orchestrator | 2026-01-05 00:53:07.833268 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-05 00:53:07.833275 | orchestrator | Monday 05 January 2026 00:53:02 +0000 (0:00:03.395) 0:00:22.756 ******** 2026-01-05 00:53:07.833282 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:53:07.833292 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:53:07.833298 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 00:53:07.833304 | orchestrator | 2026-01-05 00:53:07.833311 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-05 00:53:07.833317 | orchestrator | Monday 05 January 2026 00:53:04 +0000 (0:00:02.325) 0:00:25.082 ******** 2026-01-05 00:53:07.833323 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:53:07.833330 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:53:07.833335 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 00:53:07.833342 | orchestrator | 2026-01-05 00:53:07.833348 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-05 00:53:07.833355 | orchestrator | Monday 05 January 2026 00:53:07 +0000 
(0:00:02.854) 0:00:27.937 ******** 2026-01-05 00:53:07.833362 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:07.833369 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:07.833376 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:07.833382 | orchestrator | 2026-01-05 00:53:07.833400 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-05 00:53:20.103810 | orchestrator | Monday 05 January 2026 00:53:07 +0000 (0:00:00.542) 0:00:28.479 ******** 2026-01-05 00:53:20.103898 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:53:20.103915 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:53:20.103921 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 00:53:20.103926 | orchestrator | 2026-01-05 00:53:20.103932 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-05 00:53:20.103953 | orchestrator | Monday 05 January 2026 00:53:09 +0000 (0:00:02.134) 0:00:30.614 ******** 2026-01-05 00:53:20.103958 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:53:20.103964 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:53:20.103969 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 00:53:20.103973 | orchestrator | 2026-01-05 00:53:20.103978 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-05 00:53:20.103983 | orchestrator | Monday 05 January 2026 
00:53:12 +0000 (0:00:02.142) 0:00:32.757 ******** 2026-01-05 00:53:20.103988 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-05 00:53:20.103993 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-05 00:53:20.103998 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-05 00:53:20.104002 | orchestrator | 2026-01-05 00:53:20.104007 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-05 00:53:20.104011 | orchestrator | Monday 05 January 2026 00:53:13 +0000 (0:00:01.633) 0:00:34.390 ******** 2026-01-05 00:53:20.104017 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-05 00:53:20.104021 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-05 00:53:20.104026 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-05 00:53:20.104031 | orchestrator | 2026-01-05 00:53:20.104035 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-05 00:53:20.104040 | orchestrator | Monday 05 January 2026 00:53:15 +0000 (0:00:01.616) 0:00:36.007 ******** 2026-01-05 00:53:20.104044 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:53:20.104049 | orchestrator | 2026-01-05 00:53:20.104054 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-05 00:53:20.104058 | orchestrator | Monday 05 January 2026 00:53:15 +0000 (0:00:00.562) 0:00:36.569 ******** 2026-01-05 00:53:20.104065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:53:20.104079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:53:20.104084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:53:20.104108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:20.104114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:20.104119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:20.104124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:53:20.104134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:53:20.104139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:53:20.104144 | orchestrator | 2026-01-05 00:53:20.104148 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-05 00:53:20.104157 | orchestrator | Monday 05 January 2026 00:53:19 +0000 (0:00:03.590) 0:00:40.160 ******** 2026-01-05 00:53:20.104166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:53:20.899584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:20.899672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:20.899686 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:20.899698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:53:20.899708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:20.899731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:20.899741 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:20.899810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:53:20.899845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:20.899855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:20.899864 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:20.899873 | orchestrator | 2026-01-05 00:53:20.899883 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-05 
00:53:20.899893 | orchestrator | Monday 05 January 2026 00:53:20 +0000 (0:00:00.593) 0:00:40.753 ******** 2026-01-05 00:53:20.899902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:53:20.899911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:20.899933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:20.899968 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:20.899988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:53:20.900021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:21.755664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:21.755848 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:21.755868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:53:21.755882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:21.755891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:21.755900 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:21.755932 | orchestrator | 2026-01-05 00:53:21.755966 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-05 00:53:21.755977 | orchestrator | Monday 05 January 2026 00:53:20 +0000 (0:00:00.794) 0:00:41.548 ******** 2026-01-05 00:53:21.755986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:53:21.755996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:21.756023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:21.756032 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:21.756042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:53:21.756067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:21.756077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:21.756095 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:21.756109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:53:21.756118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:21.756127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:21.756142 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:23.229491 | orchestrator | 2026-01-05 00:53:23.229600 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-05 00:53:23.229615 | orchestrator | Monday 05 January 2026 00:53:21 +0000 (0:00:00.852) 0:00:42.400 ******** 2026-01-05 00:53:23.229629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:53:23.229644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:23.229655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:23.229701 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:23.229729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:53:23.229741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:23.229838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:23.229858 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:23.229900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:53:23.229912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:23.229923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:23.229947 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:23.229957 | orchestrator | 2026-01-05 00:53:23.229967 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-05 00:53:23.229977 | orchestrator | Monday 05 January 2026 00:53:22 +0000 (0:00:00.657) 0:00:43.058 ******** 2026-01-05 00:53:23.229988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:53:23.230005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:23.230098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:23.230119 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:23.230149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:53:24.219738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:24.219865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:24.219892 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:24.219899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:53:24.219915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:24.219919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:24.219923 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:24.219928 | orchestrator | 2026-01-05 00:53:24.219933 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-05 00:53:24.219938 | orchestrator | Monday 05 January 2026 00:53:23 +0000 (0:00:00.821) 0:00:43.880 ******** 2026-01-05 00:53:24.219942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-01-05 00:53:24.219961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:24.219966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:24.219974 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:24.219978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-01-05 00:53:24.219986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:24.219990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:24.219994 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:24.219998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-01-05 00:53:24.220004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:25.679564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:25.679691 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:25.679700 | orchestrator | 2026-01-05 00:53:25.679707 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-05 00:53:25.679712 | orchestrator | Monday 05 January 2026 00:53:24 +0000 (0:00:00.985) 0:00:44.865 ******** 2026-01-05 00:53:25.679718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:53:25.679724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:25.679729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:25.679733 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:25.679737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:53:25.679781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:25.679802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:25.679813 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:25.679821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:53:25.679827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:25.679853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:25.679860 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:25.679867 | orchestrator | 2026-01-05 00:53:25.679874 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-05 00:53:25.679880 | orchestrator | Monday 05 January 2026 00:53:24 +0000 (0:00:00.606) 0:00:45.471 ******** 2026-01-05 00:53:25.679887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 00:53:25.679892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:25.679902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:32.653340 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:32.653463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 00:53:32.653482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:32.653494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:32.653503 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:32.653527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 00:53:32.653536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 00:53:32.653543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 00:53:32.653650 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:32.653662 | orchestrator | 2026-01-05 00:53:32.653671 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-05 00:53:32.653681 | orchestrator | Monday 05 January 2026 00:53:25 +0000 (0:00:00.859) 0:00:46.330 ******** 2026-01-05 00:53:32.653688 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:53:32.653719 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:53:32.653813 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 00:53:32.653823 | orchestrator | 2026-01-05 00:53:32.653831 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-05 00:53:32.653839 | orchestrator | Monday 05 January 2026 00:53:27 +0000 (0:00:01.657) 0:00:47.987 ******** 2026-01-05 00:53:32.653847 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:53:32.653855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:53:32.653863 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 00:53:32.653870 | orchestrator | 2026-01-05 00:53:32.653878 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-05 00:53:32.653886 | orchestrator | Monday 05 January 2026 00:53:29 +0000 (0:00:01.756) 0:00:49.744 ******** 2026-01-05 00:53:32.653894 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:53:32.653903 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:53:32.653912 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 00:53:32.653920 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:53:32.653929 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:32.653938 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:53:32.653947 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:32.653956 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 00:53:32.653966 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:32.653974 | orchestrator | 2026-01-05 00:53:32.653984 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-05 00:53:32.653995 | orchestrator | Monday 05 January 2026 00:53:29 +0000 (0:00:00.836) 0:00:50.580 ******** 2026-01-05 00:53:32.654013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 00:53:32.654087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 00:53:32.654110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 00:53:32.654133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:36.998123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:36.998209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 00:53:36.998231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:53:36.998237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:53:36.998257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 00:53:36.998265 | orchestrator | 2026-01-05 00:53:36.998273 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-05 00:53:36.998285 | orchestrator | Monday 05 January 2026 00:53:32 +0000 (0:00:02.724) 0:00:53.305 ******** 2026-01-05 00:53:36.998292 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:53:36.998298 | orchestrator | 2026-01-05 00:53:36.998304 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-05 00:53:36.998311 | orchestrator | Monday 05 January 2026 00:53:33 +0000 (0:00:00.830) 0:00:54.135 ******** 2026-01-05 00:53:36.998336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 00:53:36.998346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:53:36.998353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:36.998364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:53:36.998377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 00:53:36.998384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:53:36.998391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:36.998402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:53:37.748102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 00:53:37.748205 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:53:37.748233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:37.748240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:53:37.748248 | orchestrator | 2026-01-05 00:53:37.748256 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-01-05 00:53:37.748263 | orchestrator | Monday 05 January 2026 00:53:36 +0000 (0:00:03.510) 0:00:57.645 ******** 2026-01-05 00:53:37.748271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 00:53:37.748293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:53:37.748300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:37.748307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:53:37.748319 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:37.748327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 00:53:37.748334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:53:37.748341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:37.748347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:53:37.748354 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:37.748366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 00:53:46.093159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 00:53:46.093257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-01-05 00:53:46.093267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.093274 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:46.093283 | orchestrator | 2026-01-05 00:53:46.093292 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-05 00:53:46.093300 | orchestrator | Monday 05 January 2026 00:53:37 +0000 (0:00:00.750) 0:00:58.396 ******** 2026-01-05 00:53:46.093308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:53:46.093316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:53:46.093325 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:46.093331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:53:46.093338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:53:46.093344 | 
orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:46.093350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:53:46.093357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 00:53:46.093363 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:46.093369 | orchestrator | 2026-01-05 00:53:46.093378 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-05 00:53:46.093389 | orchestrator | Monday 05 January 2026 00:53:39 +0000 (0:00:01.329) 0:00:59.725 ******** 2026-01-05 00:53:46.093405 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:53:46.093439 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:53:46.093449 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:53:46.093460 | orchestrator | 2026-01-05 00:53:46.093470 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-05 00:53:46.093481 | orchestrator | Monday 05 January 2026 00:53:40 +0000 (0:00:01.315) 0:01:01.040 ******** 2026-01-05 00:53:46.093491 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:53:46.093500 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:53:46.093511 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:53:46.093520 | orchestrator | 2026-01-05 00:53:46.093530 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-05 00:53:46.093540 | orchestrator | Monday 05 January 2026 00:53:42 +0000 (0:00:01.967) 0:01:03.008 ******** 2026-01-05 00:53:46.093551 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:53:46.093561 | 
orchestrator | 2026-01-05 00:53:46.093592 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-05 00:53:46.093602 | orchestrator | Monday 05 January 2026 00:53:42 +0000 (0:00:00.598) 0:01:03.607 ******** 2026-01-05 00:53:46.093624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 00:53:46.093639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.093652 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.093663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 00:53:46.093679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.093725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 00:53:46.687525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.687620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.687632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.687640 | orchestrator | 2026-01-05 00:53:46.687648 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-05 00:53:46.687676 | orchestrator | Monday 05 January 2026 00:53:46 +0000 (0:00:03.133) 0:01:06.740 ******** 2026-01-05 00:53:46.687684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 00:53:46.687738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.687774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.687782 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:46.687790 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 00:53:46.687797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.687810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.687817 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:46.687823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 00:53:46.687834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 00:53:46.687846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:53:56.801119 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:56.801269 | orchestrator | 2026-01-05 00:53:56.801297 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-05 00:53:56.801311 | orchestrator | Monday 05 January 2026 00:53:46 +0000 (0:00:00.591) 0:01:07.332 ******** 2026-01-05 00:53:56.801342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:53:56.801358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:53:56.801383 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:56.801394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:53:56.801431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:53:56.801443 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:56.801454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:53:56.801466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 00:53:56.801476 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:56.801487 | orchestrator | 2026-01-05 00:53:56.801498 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-05 00:53:56.801509 | orchestrator | Monday 05 January 2026 00:53:47 +0000 (0:00:00.908) 0:01:08.241 ******** 2026-01-05 00:53:56.801520 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:53:56.801532 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:53:56.801551 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:53:56.801568 | orchestrator | 2026-01-05 00:53:56.801607 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-05 00:53:56.801625 | orchestrator | Monday 05 January 2026 00:53:49 +0000 (0:00:01.654) 0:01:09.896 ******** 2026-01-05 00:53:56.801642 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:53:56.801658 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:53:56.801792 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:53:56.801820 | orchestrator | 2026-01-05 00:53:56.801840 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-05 00:53:56.801859 | orchestrator | 
Monday 05 January 2026 00:53:51 +0000 (0:00:02.228) 0:01:12.125 ******** 2026-01-05 00:53:56.801872 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:56.801883 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:56.801894 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:56.801905 | orchestrator | 2026-01-05 00:53:56.801916 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-05 00:53:56.801927 | orchestrator | Monday 05 January 2026 00:53:51 +0000 (0:00:00.322) 0:01:12.447 ******** 2026-01-05 00:53:56.801938 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:53:56.801948 | orchestrator | 2026-01-05 00:53:56.801959 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-05 00:53:56.801970 | orchestrator | Monday 05 January 2026 00:53:52 +0000 (0:00:00.690) 0:01:13.138 ******** 2026-01-05 00:53:56.802003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 00:53:56.802129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 00:53:56.802173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 00:53:56.802193 | orchestrator | 2026-01-05 00:53:56.802209 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-05 00:53:56.802226 | orchestrator | Monday 05 January 2026 00:53:55 +0000 (0:00:02.914) 0:01:16.053 ******** 2026-01-05 00:53:56.802242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 00:53:56.802260 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:53:56.802277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 00:53:56.802295 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:53:56.802346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 00:53:56.802392 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:53:56.802404 | orchestrator | 2026-01-05 00:53:56.802431 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-05 00:54:04.789767 | orchestrator | Monday 05 January 2026 00:53:56 +0000 (0:00:01.394) 0:01:17.447 ******** 2026-01-05 00:54:04.789926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:54:04.789952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:54:04.789972 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:04.789985 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:54:04.789997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:54:04.790009 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:04.790076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:54:04.790089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-05 00:54:04.790101 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:04.790112 | orchestrator | 2026-01-05 00:54:04.790124 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-05 00:54:04.790136 | orchestrator | Monday 05 January 2026 00:53:58 +0000 (0:00:01.774) 0:01:19.222 ******** 2026-01-05 00:54:04.790148 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:04.790159 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:04.790170 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:04.790204 | orchestrator | 2026-01-05 00:54:04.790217 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-05 00:54:04.790232 | orchestrator | Monday 05 January 2026 00:53:59 +0000 (0:00:00.503) 0:01:19.725 ******** 2026-01-05 00:54:04.790245 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:04.790258 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:04.790271 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:04.790284 | orchestrator | 2026-01-05 00:54:04.790297 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-05 00:54:04.790311 | orchestrator | Monday 05 January 2026 00:54:00 +0000 (0:00:01.321) 0:01:21.046 ******** 2026-01-05 00:54:04.790329 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:04.790347 | orchestrator | 2026-01-05 00:54:04.790377 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-05 00:54:04.790397 | orchestrator | Monday 05 January 2026 00:54:01 +0000 (0:00:01.041) 0:01:22.088 ******** 2026-01-05 00:54:04.790445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:04.790469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:04.790492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 
00:54:04.790533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:54:04.790575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:04.790609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 00:54:05.425867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:05.425961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:05.425972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:54:05.425980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:54:05.426059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:54:05.426083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:54:05.426090 | orchestrator | 2026-01-05 00:54:05.426099 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-05 00:54:05.426107 | orchestrator | Monday 05 January 2026 00:54:04 +0000 (0:00:03.445) 0:01:25.533 ******** 2026-01-05 00:54:05.426114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 00:54:05.426122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:05.426128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:54:05.426145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:54:05.426152 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:05.426160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 00:54:05.426171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 
00:54:14.899590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:54:14.899701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:54:14.899720 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:14.899734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 00:54:14.899739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:54:14.899744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 00:54:14.899759 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 00:54:14.899763 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:14.899767 | orchestrator | 2026-01-05 00:54:14.899772 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-05 00:54:14.899776 | orchestrator | Monday 05 January 2026 00:54:05 +0000 (0:00:00.639) 0:01:26.173 ******** 2026-01-05 00:54:14.899781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:14.899787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:14.899795 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:14.899799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:14.899803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:14.899812 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:14.899816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:14.899820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 00:54:14.899824 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:14.899827 | orchestrator | 2026-01-05 00:54:14.899832 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-05 00:54:14.899835 | orchestrator | Monday 05 January 2026 00:54:06 +0000 (0:00:01.118) 0:01:27.292 ******** 2026-01-05 00:54:14.899839 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:14.899843 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:14.899847 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:14.899851 | orchestrator | 2026-01-05 00:54:14.899857 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-05 00:54:14.899861 | orchestrator | Monday 05 January 2026 00:54:07 +0000 (0:00:01.367) 0:01:28.659 ******** 2026-01-05 00:54:14.899865 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:14.899868 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:14.899872 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:14.899876 | orchestrator | 2026-01-05 00:54:14.899880 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-05 00:54:14.899883 | orchestrator | Monday 05 January 2026 00:54:09 +0000 
(0:00:01.985) 0:01:30.644 ******** 2026-01-05 00:54:14.899887 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:14.899891 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:14.899895 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:14.899898 | orchestrator | 2026-01-05 00:54:14.899902 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-05 00:54:14.899906 | orchestrator | Monday 05 January 2026 00:54:10 +0000 (0:00:00.272) 0:01:30.917 ******** 2026-01-05 00:54:14.899910 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:14.899913 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:14.899917 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:14.899933 | orchestrator | 2026-01-05 00:54:14.899937 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-05 00:54:14.899941 | orchestrator | Monday 05 January 2026 00:54:10 +0000 (0:00:00.288) 0:01:31.205 ******** 2026-01-05 00:54:14.899945 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:14.899949 | orchestrator | 2026-01-05 00:54:14.899952 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-05 00:54:14.899956 | orchestrator | Monday 05 January 2026 00:54:11 +0000 (0:00:01.003) 0:01:32.208 ******** 2026-01-05 00:54:14.899964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 00:54:15.157979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:15.158085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.158100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.158117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.158125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.158132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.158173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 00:54:15.158182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:15.158189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.158198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 00:54:15.158205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.158218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:15.158232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.739843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.739897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.739910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.739914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 
'timeout': '30'}}})  2026-01-05 00:54:15.739918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.739930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.739941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.739945 | orchestrator | 2026-01-05 
00:54:15.739950 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-05 00:54:15.739954 | orchestrator | Monday 05 January 2026 00:54:15 +0000 (0:00:03.599) 0:01:35.808 ******** 2026-01-05 00:54:15.739958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 00:54:15.739962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:15.739966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.739973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.739976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:15.740228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 00:54:16.210799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:16.210869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:16.210878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:16.210899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:16.210903 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:16.210908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:16.210923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:16.210934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:16.210938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 
00:54:16.210941 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:16.210945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 00:54:16.210951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 00:54:16.210954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 00:54:16.210960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 00:54:16.210966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 00:54:26.118566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:54:26.118722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 00:54:26.118749 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:26.118756 | orchestrator | 2026-01-05 00:54:26.118762 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-05 00:54:26.118794 | orchestrator | Monday 05 January 2026 00:54:16 +0000 (0:00:01.055) 0:01:36.863 ******** 2026-01-05 00:54:26.118800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:26.118807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:26.118812 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:26.118816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}})  2026-01-05 00:54:26.118820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:26.118824 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:26.118828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:26.118832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-05 00:54:26.118836 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:26.118840 | orchestrator | 2026-01-05 00:54:26.118844 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-05 00:54:26.118848 | orchestrator | Monday 05 January 2026 00:54:17 +0000 (0:00:01.118) 0:01:37.982 ******** 2026-01-05 00:54:26.118852 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:26.118856 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:26.118859 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:26.118863 | orchestrator | 2026-01-05 00:54:26.118867 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-05 00:54:26.118882 | orchestrator | Monday 05 January 2026 00:54:18 +0000 (0:00:01.169) 0:01:39.152 ******** 2026-01-05 00:54:26.118886 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:26.118890 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:26.118893 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:26.118897 | orchestrator | 2026-01-05 00:54:26.118901 | orchestrator | TASK [include_role : 
etcd] ***************************************************** 2026-01-05 00:54:26.118905 | orchestrator | Monday 05 January 2026 00:54:20 +0000 (0:00:02.071) 0:01:41.223 ******** 2026-01-05 00:54:26.118909 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:26.118912 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:26.118916 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:26.118920 | orchestrator | 2026-01-05 00:54:26.118924 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-05 00:54:26.118927 | orchestrator | Monday 05 January 2026 00:54:20 +0000 (0:00:00.338) 0:01:41.562 ******** 2026-01-05 00:54:26.118931 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:26.118935 | orchestrator | 2026-01-05 00:54:26.118939 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-05 00:54:26.118943 | orchestrator | Monday 05 January 2026 00:54:21 +0000 (0:00:01.002) 0:01:42.564 ******** 2026-01-05 00:54:26.118964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:54:26.118979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:26.118987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:54:29.378364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:29.378475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 00:54:29.378525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:29.378536 | orchestrator | 2026-01-05 00:54:29.378546 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-05 00:54:29.378554 | orchestrator | Monday 05 January 2026 00:54:26 +0000 (0:00:04.334) 0:01:46.898 ******** 2026-01-05 00:54:29.378567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:54:29.378590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.011729 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:33.011929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:54:33.011981 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.011996 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:33.012038 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 00:54:33.012053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 00:54:33.012075 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:33.012087 | orchestrator | 2026-01-05 00:54:33.012100 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-05 00:54:33.012113 | orchestrator | Monday 05 January 2026 00:54:29 +0000 (0:00:03.247) 0:01:50.146 ******** 2026-01-05 
00:54:33.012125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:33.012148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:41.644455 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:41.644639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:41.644686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:41.644724 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:41.644737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:41.644750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 00:54:41.644761 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:41.644774 | orchestrator | 2026-01-05 00:54:41.644787 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-05 00:54:41.644800 | orchestrator | Monday 05 January 2026 00:54:33 +0000 (0:00:03.516) 0:01:53.662 ******** 2026-01-05 00:54:41.644811 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:41.644821 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:41.644833 | orchestrator | changed: 
[testbed-node-2] 2026-01-05 00:54:41.644845 | orchestrator | 2026-01-05 00:54:41.644857 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-05 00:54:41.644869 | orchestrator | Monday 05 January 2026 00:54:34 +0000 (0:00:01.348) 0:01:55.010 ******** 2026-01-05 00:54:41.644880 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:41.644892 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:41.644904 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:41.644916 | orchestrator | 2026-01-05 00:54:41.644928 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-05 00:54:41.644942 | orchestrator | Monday 05 January 2026 00:54:36 +0000 (0:00:02.179) 0:01:57.190 ******** 2026-01-05 00:54:41.644952 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:41.644960 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:41.644967 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:41.644975 | orchestrator | 2026-01-05 00:54:41.644983 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-05 00:54:41.644992 | orchestrator | Monday 05 January 2026 00:54:36 +0000 (0:00:00.353) 0:01:57.543 ******** 2026-01-05 00:54:41.645001 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:41.645010 | orchestrator | 2026-01-05 00:54:41.645019 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-05 00:54:41.645027 | orchestrator | Monday 05 January 2026 00:54:37 +0000 (0:00:01.071) 0:01:58.615 ******** 2026-01-05 00:54:41.645057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 00:54:41.645079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 00:54:41.645166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 00:54:41.645192 | orchestrator | 2026-01-05 00:54:41.645204 | orchestrator | TASK 
[haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-05 00:54:41.645216 | orchestrator | Monday 05 January 2026 00:54:41 +0000 (0:00:03.076) 0:02:01.691 ******** 2026-01-05 00:54:41.645228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 00:54:41.645239 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:41.645250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 00:54:41.645262 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:41.645274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 00:54:41.645294 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:41.645307 | orchestrator | 2026-01-05 00:54:41.645320 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-05 00:54:41.645331 | orchestrator | Monday 05 January 2026 00:54:41 +0000 (0:00:00.404) 0:02:02.096 ******** 2026-01-05 00:54:41.645345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:41.645369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:50.739456 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:50.739670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:50.739724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:50.739748 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
00:54:50.739769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:50.739788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-05 00:54:50.739807 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:50.739825 | orchestrator | 2026-01-05 00:54:50.739847 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-05 00:54:50.739868 | orchestrator | Monday 05 January 2026 00:54:42 +0000 (0:00:00.872) 0:02:02.969 ******** 2026-01-05 00:54:50.739888 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:50.739907 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:50.739926 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:50.739941 | orchestrator | 2026-01-05 00:54:50.739954 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-05 00:54:50.739967 | orchestrator | Monday 05 January 2026 00:54:43 +0000 (0:00:01.388) 0:02:04.357 ******** 2026-01-05 00:54:50.739980 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:54:50.739993 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:54:50.740005 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:54:50.740019 | orchestrator | 2026-01-05 00:54:50.740032 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-05 00:54:50.740050 | orchestrator | Monday 05 January 2026 00:54:45 +0000 (0:00:02.139) 0:02:06.496 ******** 2026-01-05 00:54:50.740070 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:50.740089 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:50.740110 | orchestrator | 
skipping: [testbed-node-2] 2026-01-05 00:54:50.740130 | orchestrator | 2026-01-05 00:54:50.740150 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-05 00:54:50.740169 | orchestrator | Monday 05 January 2026 00:54:46 +0000 (0:00:00.319) 0:02:06.816 ******** 2026-01-05 00:54:50.740191 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:54:50.740206 | orchestrator | 2026-01-05 00:54:50.740220 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-05 00:54:50.740237 | orchestrator | Monday 05 January 2026 00:54:47 +0000 (0:00:01.138) 0:02:07.955 ******** 2026-01-05 00:54:50.740295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:54:50.740366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:54:50.740427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 00:54:52.417937 | orchestrator | 2026-01-05 00:54:52.418085 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-05 00:54:52.418095 | orchestrator | Monday 05 January 2026 00:54:50 +0000 (0:00:03.436) 0:02:11.391 ******** 2026-01-05 00:54:52.418105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:54:52.418138 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:54:52.418173 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:54:52.418183 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:54:52.418191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 00:54:52.418211 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:54:52.418219 | orchestrator | 2026-01-05 00:54:52.418226 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-05 00:54:52.418233 | orchestrator | Monday 05 January 2026 00:54:51 +0000 (0:00:00.671) 0:02:12.063 ******** 2026-01-05 00:54:52.418243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:54:52.418252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:54:52.418262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:54:52.418283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:55:01.525063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:55:01.525197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:55:01.525220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:55:01.525239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:55:01.525285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 00:55:01.525306 | orchestrator | 
skipping: [testbed-node-1] 2026-01-05 00:55:01.525323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 00:55:01.525338 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:01.525354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:55:01.525372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:55:01.525389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 00:55:01.525402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 00:55:01.525411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 00:55:01.525420 | orchestrator | 
skipping: [testbed-node-2] 2026-01-05 00:55:01.525429 | orchestrator | 2026-01-05 00:55:01.525439 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-05 00:55:01.525449 | orchestrator | Monday 05 January 2026 00:54:52 +0000 (0:00:01.005) 0:02:13.069 ******** 2026-01-05 00:55:01.525458 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:01.525467 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:01.525476 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:01.525484 | orchestrator | 2026-01-05 00:55:01.525494 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-05 00:55:01.525502 | orchestrator | Monday 05 January 2026 00:54:54 +0000 (0:00:01.672) 0:02:14.742 ******** 2026-01-05 00:55:01.525511 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:01.525520 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:01.525528 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:01.525564 | orchestrator | 2026-01-05 00:55:01.525597 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-05 00:55:01.525609 | orchestrator | Monday 05 January 2026 00:54:56 +0000 (0:00:02.159) 0:02:16.902 ******** 2026-01-05 00:55:01.525637 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:01.525648 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:01.525659 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:01.525669 | orchestrator | 2026-01-05 00:55:01.525679 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-05 00:55:01.525690 | orchestrator | Monday 05 January 2026 00:54:56 +0000 (0:00:00.321) 0:02:17.224 ******** 2026-01-05 00:55:01.525702 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:01.525721 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:01.525731 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 00:55:01.525742 | orchestrator | 2026-01-05 00:55:01.525753 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-05 00:55:01.525764 | orchestrator | Monday 05 January 2026 00:54:56 +0000 (0:00:00.371) 0:02:17.595 ******** 2026-01-05 00:55:01.525774 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:01.525784 | orchestrator | 2026-01-05 00:55:01.525795 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-05 00:55:01.525805 | orchestrator | Monday 05 January 2026 00:54:58 +0000 (0:00:01.196) 0:02:18.792 ******** 2026-01-05 00:55:01.525822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 00:55:01.525837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:55:01.525850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:55:01.525874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 00:55:02.216637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 00:55:02.216757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:55:02.216771 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:55:02.216781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:55:02.216791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:55:02.216801 | orchestrator | 2026-01-05 00:55:02.216857 | orchestrator | TASK 
[haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-05 00:55:02.216870 | orchestrator | Monday 05 January 2026 00:55:01 +0000 (0:00:03.379) 0:02:22.172 ******** 2026-01-05 00:55:02.216918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 00:55:02.216948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:55:02.216958 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:55:02.216968 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:02.216979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 00:55:02.216989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:55:02.217016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:55:12.232611 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:12.232743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 00:55:12.232766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 00:55:12.232780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 00:55:12.232792 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:12.232804 | orchestrator | 2026-01-05 00:55:12.232816 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-05 00:55:12.232828 | orchestrator | Monday 05 January 2026 00:55:02 +0000 (0:00:00.687) 0:02:22.859 ******** 2026-01-05 00:55:12.232840 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:55:12.232854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:55:12.232866 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:12.232898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:55:12.232911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:55:12.232922 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:12.232946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:55:12.232976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 00:55:12.232988 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:12.232999 | 
orchestrator | 2026-01-05 00:55:12.233011 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-05 00:55:12.233021 | orchestrator | Monday 05 January 2026 00:55:03 +0000 (0:00:01.190) 0:02:24.050 ******** 2026-01-05 00:55:12.233032 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:12.233043 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:12.233054 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:12.233065 | orchestrator | 2026-01-05 00:55:12.233076 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-05 00:55:12.233089 | orchestrator | Monday 05 January 2026 00:55:04 +0000 (0:00:01.412) 0:02:25.463 ******** 2026-01-05 00:55:12.233102 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:12.233116 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:12.233128 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:12.233142 | orchestrator | 2026-01-05 00:55:12.233154 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-05 00:55:12.233167 | orchestrator | Monday 05 January 2026 00:55:07 +0000 (0:00:02.198) 0:02:27.661 ******** 2026-01-05 00:55:12.233180 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:12.233192 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:12.233205 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:12.233217 | orchestrator | 2026-01-05 00:55:12.233230 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-05 00:55:12.233243 | orchestrator | Monday 05 January 2026 00:55:07 +0000 (0:00:00.326) 0:02:27.987 ******** 2026-01-05 00:55:12.233255 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:12.233268 | orchestrator | 2026-01-05 00:55:12.233280 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy 
config] ********************* 2026-01-05 00:55:12.233293 | orchestrator | Monday 05 January 2026 00:55:08 +0000 (0:00:01.243) 0:02:29.231 ******** 2026-01-05 00:55:12.233308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 00:55:12.233332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:12.233348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 00:55:12.233370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:13.922820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 00:55:13.922871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:13.922886 | orchestrator | 2026-01-05 00:55:13.922891 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-05 00:55:13.922894 | orchestrator | Monday 05 January 2026 00:55:12 +0000 (0:00:03.651) 0:02:32.882 ******** 2026-01-05 00:55:13.922899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 00:55:13.922904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:13.922908 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:13.922953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 00:55:13.922959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:13.922962 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:13.923091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 00:55:13.923096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:55:13.923100 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:13.923103 | orchestrator | 2026-01-05 00:55:13.923106 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-05 00:55:13.923136 | orchestrator | Monday 05 January 2026 00:55:12 +0000 (0:00:00.673) 0:02:33.556 ******** 2026-01-05 00:55:13.923142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:55:13.923146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:55:13.923151 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 00:55:13.923154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:55:13.923157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:55:13.923160 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:13.923164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:55:13.923170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-05 00:55:22.216150 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:22.216244 | orchestrator | 2026-01-05 00:55:22.216256 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-05 00:55:22.216264 | orchestrator | Monday 05 January 2026 00:55:13 +0000 (0:00:01.016) 0:02:34.573 ******** 2026-01-05 00:55:22.216271 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:22.216278 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:22.216286 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:22.216311 | orchestrator | 2026-01-05 00:55:22.216319 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-05 00:55:22.216327 | orchestrator | Monday 05 January 2026 00:55:15 +0000 (0:00:01.556) 0:02:36.129 ******** 2026-01-05 00:55:22.216333 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:22.216340 | orchestrator | changed: 
[testbed-node-1] 2026-01-05 00:55:22.216347 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:22.216354 | orchestrator | 2026-01-05 00:55:22.216360 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-05 00:55:22.216367 | orchestrator | Monday 05 January 2026 00:55:17 +0000 (0:00:02.113) 0:02:38.243 ******** 2026-01-05 00:55:22.216374 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:22.216381 | orchestrator | 2026-01-05 00:55:22.216388 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-05 00:55:22.216395 | orchestrator | Monday 05 January 2026 00:55:18 +0000 (0:00:01.137) 0:02:39.380 ******** 2026-01-05 00:55:22.216404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 00:55:22.216416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:22.216436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:22.216445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 00:55:22.216474 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:22.216481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 00:55:22.216489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:22.216496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:22.216547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:22.216555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:22.216573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:23.256177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:23.256263 | orchestrator | 2026-01-05 00:55:23.256272 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-05 00:55:23.256278 | orchestrator | Monday 05 January 2026 00:55:22 +0000 (0:00:03.585) 0:02:42.965 ******** 2026-01-05 00:55:23.256286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 00:55:23.256294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:23.256314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:23.256322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:23.256343 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:23.256363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 00:55:23.256368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:23.256374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:23.256379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:23.256384 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:23.256392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 00:55:23.256409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:55:23.256418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 00:55:34.636913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 00:55:34.637029 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:34.637045 | orchestrator | 2026-01-05 00:55:34.637061 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-05 00:55:34.637084 | orchestrator | Monday 05 January 2026 00:55:23 +0000 (0:00:01.030) 0:02:43.996 ******** 2026-01-05 00:55:34.637103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:55:34.637119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:55:34.637134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:55:34.637152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': 
'8786'}})  2026-01-05 00:55:34.637164 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:34.637179 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:34.637193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:55:34.637207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-05 00:55:34.637246 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:34.637261 | orchestrator | 2026-01-05 00:55:34.637275 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-05 00:55:34.637306 | orchestrator | Monday 05 January 2026 00:55:24 +0000 (0:00:00.880) 0:02:44.877 ******** 2026-01-05 00:55:34.637321 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:34.637335 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:34.637349 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:34.637363 | orchestrator | 2026-01-05 00:55:34.637377 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-05 00:55:34.637393 | orchestrator | Monday 05 January 2026 00:55:25 +0000 (0:00:01.337) 0:02:46.214 ******** 2026-01-05 00:55:34.637407 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:34.637423 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:34.637438 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:34.637454 | orchestrator | 2026-01-05 00:55:34.637469 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-05 00:55:34.637549 | orchestrator | Monday 05 January 2026 00:55:27 +0000 (0:00:02.158) 0:02:48.373 ******** 2026-01-05 00:55:34.637561 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:34.637572 | orchestrator | 2026-01-05 00:55:34.637581 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-05 00:55:34.637589 | orchestrator | Monday 05 January 2026 00:55:29 +0000 (0:00:01.387) 0:02:49.761 ******** 2026-01-05 00:55:34.637599 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 00:55:34.637608 | orchestrator | 2026-01-05 00:55:34.637617 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-05 00:55:34.637625 | orchestrator | Monday 05 January 2026 00:55:32 +0000 (0:00:03.266) 0:02:53.027 ******** 2026-01-05 00:55:34.637660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:34.637674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:34.637695 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:34.637713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:34.637732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:37.013915 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:37.014100 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:37.014167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:37.014180 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:37.014189 | orchestrator | 2026-01-05 00:55:37.014198 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-05 00:55:37.014207 | orchestrator | Monday 05 January 2026 00:55:34 +0000 (0:00:02.253) 0:02:55.281 ******** 2026-01-05 00:55:37.014238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:37.014288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:37.014306 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:37.014321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:37.014330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-01-05 00:55:37.014338 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:37.014355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 00:55:46.712294 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 00:55:46.712415 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:46.712443 | orchestrator | 2026-01-05 00:55:46.712545 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-05 00:55:46.712582 | orchestrator | Monday 05 January 2026 00:55:36 +0000 (0:00:02.379) 0:02:57.661 ******** 2026-01-05 00:55:46.712599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:46.712617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:46.712632 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:46.712647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:46.712663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:46.712705 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:46.712720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:46.712755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 00:55:46.712770 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:46.712784 | orchestrator | 2026-01-05 00:55:46.712798 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-05 00:55:46.712812 | orchestrator | Monday 05 January 2026 00:55:39 +0000 (0:00:02.863) 0:03:00.524 ******** 2026-01-05 00:55:46.712826 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:55:46.712840 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:55:46.712854 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:55:46.712868 | orchestrator | 2026-01-05 00:55:46.712881 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-05 00:55:46.712896 | orchestrator | Monday 05 January 2026 00:55:42 +0000 (0:00:02.148) 0:03:02.673 ******** 2026-01-05 00:55:46.712909 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:46.712922 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:46.712936 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:46.712950 | orchestrator | 2026-01-05 00:55:46.712963 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-05 00:55:46.712976 | orchestrator | Monday 05 January 2026 00:55:43 +0000 (0:00:01.452) 0:03:04.126 ******** 2026-01-05 00:55:46.712989 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:46.713003 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:46.713016 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:46.713029 | orchestrator | 2026-01-05 00:55:46.713042 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-05 00:55:46.713056 | orchestrator | Monday 05 January 2026 00:55:43 +0000 (0:00:00.304) 0:03:04.430 ******** 2026-01-05 00:55:46.713069 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:46.713082 | orchestrator | 2026-01-05 00:55:46.713095 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-05 00:55:46.713109 | orchestrator | Monday 05 January 2026 00:55:45 +0000 (0:00:01.389) 0:03:05.819 ******** 2026-01-05 00:55:46.713123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:55:46.713150 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:55:46.713166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 00:55:46.713180 | orchestrator | 2026-01-05 00:55:46.713193 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-05 00:55:46.713215 | orchestrator | Monday 05 January 2026 00:55:46 +0000 (0:00:01.543) 0:03:07.363 ******** 2026-01-05 00:55:55.218079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:55:55.218269 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:55.218290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:55:55.218299 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:55.218306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 00:55:55.218332 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:55.218339 | orchestrator | 2026-01-05 00:55:55.218346 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-05 00:55:55.218353 | orchestrator | Monday 05 January 2026 00:55:47 +0000 (0:00:00.393) 0:03:07.756 ******** 2026-01-05 00:55:55.218361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 00:55:55.218369 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:55.218376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 00:55:55.218383 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:55.218389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 00:55:55.218395 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:55.218401 | orchestrator | 2026-01-05 00:55:55.218408 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-05 00:55:55.218414 | orchestrator | Monday 05 January 2026 00:55:47 +0000 (0:00:00.869) 0:03:08.626 ******** 2026-01-05 00:55:55.218420 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:55.218426 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:55.218433 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:55.218483 | orchestrator | 2026-01-05 00:55:55.218491 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-05 00:55:55.218498 | orchestrator | Monday 05 January 2026 00:55:48 +0000 (0:00:00.462) 0:03:09.089 ******** 2026-01-05 00:55:55.218504 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:55.218511 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:55.218535 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:55.218542 | orchestrator | 2026-01-05 00:55:55.218549 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-05 00:55:55.218557 | orchestrator | Monday 05 January 2026 00:55:49 +0000 (0:00:01.293) 0:03:10.382 ******** 2026-01-05 00:55:55.218564 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:55.218571 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:55.218579 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:55:55.218586 | orchestrator | 2026-01-05 00:55:55.218593 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-05 00:55:55.218601 | orchestrator | Monday 05 January 2026 00:55:50 +0000 (0:00:00.322) 0:03:10.705 ******** 2026-01-05 00:55:55.218608 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:55:55.218615 | orchestrator | 2026-01-05 00:55:55.218622 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-01-05 00:55:55.218629 | orchestrator | Monday 05 January 2026 00:55:51 +0000 (0:00:01.462) 0:03:12.168 ******** 2026-01-05 00:55:55.218642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 00:55:55.218660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:55.218669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:55.218682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:55.218696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 00:55:55.426788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:55.426891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:55:55.426901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:55:55.426909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:55.426916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 00:55:55.426924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:55.426947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:55.426963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.426970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-05 00:55:55.426976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.427003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:55.427010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.427025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.563296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-05 00:55:55.563399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.563413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-05 00:55:55.563422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:55:55.563507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 00:55:55.563551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.563560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.563568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.563575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:55.563583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-05 00:55:55.563605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:55.563617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.761253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.761371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:55.761385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:55:55.761394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:55.761402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.761521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.761552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:55:55.761561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-05 00:55:55.761569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.761576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:55.761583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-05 00:55:55.761601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:55.761608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:55.761623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-05 00:55:56.847828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.847940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:55:56.847955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-05 00:55:56.848005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:55:56.848016 | orchestrator |
2026-01-05 00:55:56.848026 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-01-05 00:55:56.848036 | orchestrator | Monday 05 January 2026 00:55:55 +0000 (0:00:04.245) 0:03:16.413 ********
2026-01-05 00:55:56.848046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 00:55:56.848075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.848087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.848104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.848118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-05 00:55:56.848129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.848145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:56.921277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 00:55:56.921379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:56.921418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.921499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.921514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.921567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 00:55:56.921579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.921595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.921609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-05 00:55:56.921619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-05 00:55:56.921629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-05 00:55:56.921646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-05 00:55:56.993112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 00:55:56.993249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:55:56.993268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:56.993306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:56.993321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:55:56.993353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:55:56.993366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:56.993384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:56.993396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:56.993534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:56.993552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:56.993562 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:55:56.993586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 00:55:57.200744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:57.200845 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:55:57.200875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:57.200886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:55:57.200896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:55:57.200906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:55:57.200915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:57.200957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:57.200969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:55:57.200985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:57.200995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:55:57.201004 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:55:57.201014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 00:55:57.201035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 00:56:07.459113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 00:56:07.459224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 00:56:07.459255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 00:56:07.459269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 00:56:07.459280 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:07.459309 | orchestrator | 2026-01-05 00:56:07.459319 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-05 00:56:07.459329 | orchestrator | Monday 05 January 2026 00:55:57 +0000 (0:00:01.435) 0:03:17.849 ******** 2026-01-05 00:56:07.459338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:56:07.459348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}})  2026-01-05 00:56:07.459358 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:07.459366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:56:07.459374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:56:07.459382 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:07.459405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:56:07.459414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-05 00:56:07.459455 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:07.459463 | orchestrator | 2026-01-05 00:56:07.459469 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-05 00:56:07.459476 | orchestrator | Monday 05 January 2026 00:55:59 +0000 (0:00:02.113) 0:03:19.962 ******** 2026-01-05 00:56:07.459483 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:07.459490 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:07.459497 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:07.459504 | orchestrator | 2026-01-05 00:56:07.459511 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-05 00:56:07.459517 | orchestrator | Monday 05 January 2026 00:56:00 +0000 (0:00:01.422) 0:03:21.384 ******** 2026-01-05 00:56:07.459524 | orchestrator | changed: 
[testbed-node-0] 2026-01-05 00:56:07.459531 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:07.459537 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:07.459544 | orchestrator | 2026-01-05 00:56:07.459551 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-05 00:56:07.459558 | orchestrator | Monday 05 January 2026 00:56:02 +0000 (0:00:02.128) 0:03:23.512 ******** 2026-01-05 00:56:07.459565 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:07.459571 | orchestrator | 2026-01-05 00:56:07.459578 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-05 00:56:07.459585 | orchestrator | Monday 05 January 2026 00:56:04 +0000 (0:00:01.200) 0:03:24.713 ******** 2026-01-05 00:56:07.459599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 00:56:07.459624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 00:56:07.459637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 00:56:07.459648 | orchestrator | 2026-01-05 00:56:07.459661 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-05 00:56:07.459684 | orchestrator | Monday 05 January 2026 00:56:07 +0000 (0:00:03.392) 
0:03:28.106 ******** 2026-01-05 00:56:18.403200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 00:56:18.403344 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:18.403390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2026-01-05 00:56:18.403484 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:18.403496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 00:56:18.403506 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:18.403515 | orchestrator | 2026-01-05 00:56:18.403526 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-05 00:56:18.403536 | orchestrator | Monday 05 January 2026 00:56:07 +0000 (0:00:00.468) 0:03:28.575 ******** 2026-01-05 00:56:18.403546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:56:18.403558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:56:18.403569 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:18.403578 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:56:18.403587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:56:18.403597 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:18.403623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:56:18.403632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 00:56:18.403641 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:18.403650 | orchestrator | 2026-01-05 00:56:18.403659 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-05 00:56:18.403668 | orchestrator | Monday 05 January 2026 00:56:08 +0000 (0:00:00.698) 0:03:29.273 ******** 2026-01-05 00:56:18.403677 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:18.403686 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:18.403694 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:18.403703 | orchestrator | 2026-01-05 00:56:18.403712 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-05 00:56:18.403721 | orchestrator | Monday 05 January 2026 00:56:10 +0000 (0:00:01.987) 0:03:31.260 ******** 2026-01-05 00:56:18.403729 | orchestrator | changed: [testbed-node-0] 2026-01-05 
00:56:18.403738 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:18.403755 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:18.403764 | orchestrator | 2026-01-05 00:56:18.403773 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-05 00:56:18.403782 | orchestrator | Monday 05 January 2026 00:56:12 +0000 (0:00:01.930) 0:03:33.190 ******** 2026-01-05 00:56:18.403791 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:18.403799 | orchestrator | 2026-01-05 00:56:18.403808 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-05 00:56:18.403817 | orchestrator | Monday 05 January 2026 00:56:14 +0000 (0:00:01.566) 0:03:34.757 ******** 2026-01-05 00:56:18.403842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 00:56:18.403855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:56:18.403865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:56:18.403885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 00:56:19.214703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:56:19.214787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:56:19.214796 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 00:56:19.214802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:56:19.214806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:56:19.214827 | orchestrator | 2026-01-05 00:56:19.214833 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-05 00:56:19.214839 | orchestrator | Monday 05 January 2026 00:56:18 +0000 (0:00:04.298) 0:03:39.056 ******** 2026-01-05 00:56:19.214869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 00:56:19.214875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:56:19.214885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:56:19.214890 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:19.214895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 00:56:19.214908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:56:30.975615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:56:30.975709 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:30.975729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 00:56:30.975741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 00:56:30.975750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 00:56:30.975776 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:30.975785 | orchestrator | 2026-01-05 00:56:30.975795 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-05 00:56:30.975804 | orchestrator | Monday 05 January 2026 00:56:19 +0000 (0:00:00.808) 0:03:39.864 ******** 2026-01-05 00:56:30.975814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975862 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:30.975867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975900 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:30.975905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 
00:56:30.975920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 00:56:30.975925 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:30.975931 | orchestrator | 2026-01-05 00:56:30.975936 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-05 00:56:30.975941 | orchestrator | Monday 05 January 2026 00:56:20 +0000 (0:00:01.111) 0:03:40.975 ******** 2026-01-05 00:56:30.975946 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:30.975951 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:30.975956 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:30.975961 | orchestrator | 2026-01-05 00:56:30.975966 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-05 00:56:30.975977 | orchestrator | Monday 05 January 2026 00:56:21 +0000 (0:00:01.432) 0:03:42.408 ******** 2026-01-05 00:56:30.975982 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:30.975987 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:30.975992 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:30.975997 | orchestrator | 2026-01-05 00:56:30.976002 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-05 00:56:30.976007 | orchestrator | Monday 05 January 2026 00:56:24 +0000 (0:00:02.276) 0:03:44.684 ******** 2026-01-05 00:56:30.976012 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:56:30.976017 | orchestrator | 2026-01-05 00:56:30.976022 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-05 00:56:30.976027 | orchestrator | Monday 05 January 2026 00:56:25 +0000 (0:00:01.780) 
0:03:46.465 ******** 2026-01-05 00:56:30.976033 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-05 00:56:30.976046 | orchestrator | 2026-01-05 00:56:30.976051 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-05 00:56:30.976056 | orchestrator | Monday 05 January 2026 00:56:26 +0000 (0:00:00.939) 0:03:47.404 ******** 2026-01-05 00:56:30.976062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 00:56:30.976073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 00:56:43.269202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 00:56:43.269318 | orchestrator | 2026-01-05 00:56:43.269338 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-05 00:56:43.269403 | orchestrator | Monday 05 January 2026 00:56:30 +0000 (0:00:04.221) 0:03:51.626 ******** 2026-01-05 00:56:43.269427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:56:43.269449 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:43.269468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:56:43.269508 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:43.269519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:56:43.269530 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:43.269540 | orchestrator | 2026-01-05 00:56:43.269549 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-05 00:56:43.269559 | orchestrator | Monday 05 January 2026 00:56:32 +0000 (0:00:01.501) 0:03:53.128 ******** 2026-01-05 00:56:43.269570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:56:43.269582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:56:43.269593 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:43.269603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:56:43.269613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:56:43.269623 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:43.269632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:56:43.269643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 00:56:43.269669 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:43.269680 | orchestrator | 2026-01-05 00:56:43.269689 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 00:56:43.269700 | orchestrator | Monday 05 January 2026 00:56:34 +0000 (0:00:01.649) 0:03:54.778 ******** 2026-01-05 00:56:43.269712 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:43.269732 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:43.269744 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:43.269756 | orchestrator | 2026-01-05 00:56:43.269768 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 00:56:43.269779 | orchestrator | Monday 05 January 2026 00:56:36 +0000 (0:00:02.583) 0:03:57.362 ******** 2026-01-05 00:56:43.269790 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:56:43.269802 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:56:43.269813 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:56:43.269824 | orchestrator | 2026-01-05 00:56:43.269835 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-05 00:56:43.269855 | orchestrator | Monday 05 January 2026 00:56:39 +0000 (0:00:03.057) 0:04:00.420 ******** 2026-01-05 00:56:43.269867 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-05 00:56:43.269879 | orchestrator | 
2026-01-05 00:56:43.269891 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-05 00:56:43.269902 | orchestrator | Monday 05 January 2026 00:56:40 +0000 (0:00:01.129) 0:04:01.549 ******** 2026-01-05 00:56:43.269914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:56:43.269926 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:43.269937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:56:43.269948 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:56:43.269960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:56:43.269971 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:56:43.269983 | orchestrator | 2026-01-05 00:56:43.269994 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-05 00:56:43.270006 | orchestrator | Monday 05 January 2026 00:56:41 +0000 (0:00:01.054) 0:04:02.603 ******** 2026-01-05 00:56:43.270066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:56:43.270079 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:56:43.270091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:56:43.270109 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:06.938858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 00:57:06.938981 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:06.938995 | orchestrator | 2026-01-05 00:57:06.939005 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-05 00:57:06.939014 | orchestrator | Monday 05 January 2026 00:56:43 +0000 (0:00:01.316) 0:04:03.919 ******** 2026-01-05 00:57:06.939023 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:06.939031 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:06.939039 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:06.939047 | orchestrator | 2026-01-05 00:57:06.939055 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 00:57:06.939063 | orchestrator | Monday 05 January 2026 00:56:44 +0000 (0:00:01.536) 0:04:05.456 ******** 2026-01-05 00:57:06.939071 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:06.939080 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:06.939088 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:06.939096 | orchestrator | 2026-01-05 00:57:06.939108 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 00:57:06.939121 | orchestrator | Monday 05 January 2026 00:56:47 +0000 (0:00:02.770) 0:04:08.227 ******** 2026-01-05 00:57:06.939134 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:06.939147 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:06.939161 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:06.939173 | orchestrator | 2026-01-05 00:57:06.939186 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-serialproxy] ***************** 2026-01-05 00:57:06.939199 | orchestrator | Monday 05 January 2026 00:56:50 +0000 (0:00:02.698) 0:04:10.925 ******** 2026-01-05 00:57:06.939212 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-05 00:57:06.939226 | orchestrator | 2026-01-05 00:57:06.939240 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-05 00:57:06.939253 | orchestrator | Monday 05 January 2026 00:56:51 +0000 (0:00:01.222) 0:04:12.147 ******** 2026-01-05 00:57:06.939266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:57:06.939275 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:06.939284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:57:06.939292 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:06.939300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:57:06.939350 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:06.939365 | orchestrator | 2026-01-05 00:57:06.939379 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-05 00:57:06.939397 | orchestrator | Monday 05 January 2026 00:56:52 +0000 (0:00:01.229) 0:04:13.377 ******** 2026-01-05 00:57:06.939433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:57:06.939451 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:06.939460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:57:06.939470 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:06.939479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 00:57:06.939489 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:06.939499 | orchestrator | 2026-01-05 00:57:06.939509 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-05 00:57:06.939518 | orchestrator | Monday 05 January 2026 00:56:54 +0000 (0:00:01.391) 0:04:14.768 ******** 2026-01-05 00:57:06.939530 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:06.939544 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:06.939558 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:06.939571 | orchestrator | 2026-01-05 00:57:06.939586 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 00:57:06.939601 | orchestrator | Monday 05 January 2026 00:56:56 +0000 (0:00:01.922) 0:04:16.691 ******** 2026-01-05 00:57:06.939614 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:06.939629 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:06.939639 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:06.939648 | orchestrator | 2026-01-05 00:57:06.939658 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 00:57:06.939667 | orchestrator | Monday 05 January 2026 
00:56:58 +0000 (0:00:02.486) 0:04:19.177 ******** 2026-01-05 00:57:06.939677 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:57:06.939686 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:57:06.939695 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:57:06.939704 | orchestrator | 2026-01-05 00:57:06.939714 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-05 00:57:06.939724 | orchestrator | Monday 05 January 2026 00:57:01 +0000 (0:00:03.432) 0:04:22.610 ******** 2026-01-05 00:57:06.939742 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:06.939750 | orchestrator | 2026-01-05 00:57:06.939758 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-05 00:57:06.939766 | orchestrator | Monday 05 January 2026 00:57:03 +0000 (0:00:01.333) 0:04:23.944 ******** 2026-01-05 00:57:06.939780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 00:57:06.939797 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:57:06.939827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.664077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.664186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:57:07.664198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 00:57:07.664236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 
00:57:07.664245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.665758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.665927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:57:07.665939 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 00:57:07.665968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:57:07.665978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.665987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.665999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:57:07.666008 | orchestrator | 2026-01-05 00:57:07.666100 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-05 00:57:07.666110 | orchestrator | Monday 05 January 2026 00:57:07 +0000 (0:00:03.783) 0:04:27.727 ******** 2026-01-05 00:57:07.666132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:57:07.803442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:57:07.803592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-01-05 00:57:07.803608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.803622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:57:07.803654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:57:07.803666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:57:07.803679 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:07.803714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.803734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.803762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:57:07.803773 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:07.803797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 00:57:07.803815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 00:57:07.803827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 00:57:07.803847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 00:57:19.722338 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 00:57:19.722408 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:19.722420 | orchestrator | 2026-01-05 00:57:19.722428 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-05 00:57:19.722436 | orchestrator | Monday 05 January 2026 00:57:07 +0000 (0:00:00.732) 0:04:28.459 ******** 2026-01-05 00:57:19.722443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:57:19.722452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:57:19.722458 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:19.722465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:57:19.722471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2026-01-05 00:57:19.722478 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:19.722485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:57:19.722491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 00:57:19.722498 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:19.722505 | orchestrator | 2026-01-05 00:57:19.722511 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-05 00:57:19.722515 | orchestrator | Monday 05 January 2026 00:57:08 +0000 (0:00:00.911) 0:04:29.371 ******** 2026-01-05 00:57:19.722519 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:19.722523 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:19.722527 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:19.722531 | orchestrator | 2026-01-05 00:57:19.722543 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-05 00:57:19.722547 | orchestrator | Monday 05 January 2026 00:57:10 +0000 (0:00:01.831) 0:04:31.203 ******** 2026-01-05 00:57:19.722551 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:19.722555 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:19.722559 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:19.722573 | orchestrator | 2026-01-05 00:57:19.722577 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-05 00:57:19.722581 | orchestrator | Monday 05 January 2026 00:57:12 +0000 (0:00:02.232) 0:04:33.435 ******** 2026-01-05 00:57:19.722585 | orchestrator | 
included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:19.722589 | orchestrator | 2026-01-05 00:57:19.722593 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-05 00:57:19.722596 | orchestrator | Monday 05 January 2026 00:57:14 +0000 (0:00:01.389) 0:04:34.825 ******** 2026-01-05 00:57:19.722602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:19.722619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:19.722623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:57:19.722632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:19.722644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:19.722657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:57:21.784867 | orchestrator | 2026-01-05 00:57:21.784920 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-05 00:57:21.784928 | orchestrator | Monday 05 January 2026 00:57:19 +0000 (0:00:05.542) 0:04:40.367 ******** 2026-01-05 00:57:21.784934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:57:21.784949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:57:21.784965 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.784970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:57:21.784975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:57:21.784988 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.784992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 00:57:21.785004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 00:57:21.785011 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:21.785015 | orchestrator | 2026-01-05 00:57:21.785020 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-05 00:57:21.785026 | orchestrator | Monday 05 January 2026 00:57:20 +0000 (0:00:01.139) 0:04:41.507 ******** 2026-01-05 00:57:21.785033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 00:57:21.785044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:57:21.785053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:57:21.785060 | 
orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:21.785066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 00:57:21.785072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:57:21.785079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:57:21.785086 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:21.785092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 00:57:21.785098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:57:21.785113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 00:57:28.401577 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:28.401682 | orchestrator | 2026-01-05 00:57:28.401694 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-05 00:57:28.401704 | orchestrator | Monday 
05 January 2026 00:57:21 +0000 (0:00:00.931) 0:04:42.438 ******** 2026-01-05 00:57:28.401711 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:28.401719 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:28.401726 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:28.401732 | orchestrator | 2026-01-05 00:57:28.401740 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-05 00:57:28.401769 | orchestrator | Monday 05 January 2026 00:57:22 +0000 (0:00:00.448) 0:04:42.887 ******** 2026-01-05 00:57:28.401776 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:28.401783 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:28.401789 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:28.401796 | orchestrator | 2026-01-05 00:57:28.401803 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-05 00:57:28.401810 | orchestrator | Monday 05 January 2026 00:57:24 +0000 (0:00:01.810) 0:04:44.698 ******** 2026-01-05 00:57:28.401817 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:28.401825 | orchestrator | 2026-01-05 00:57:28.401832 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-05 00:57:28.401839 | orchestrator | Monday 05 January 2026 00:57:25 +0000 (0:00:01.757) 0:04:46.455 ******** 2026-01-05 00:57:28.401861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 00:57:28.401873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:57:28.401881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:57:28.401889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:57:28.401896 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:57:28.401919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 00:57:28.401933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:57:28.401944 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:57:28.401951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:57:28.401958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:57:28.401965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 00:57:28.401972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 00:57:28.401989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:57:30.170071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 00:57:30.170152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 00:57:30.170172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 00:57:30.170187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-05 00:57:30.170192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:30.170216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:30.170233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:57:30.170240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 00:57:30.170245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-05 00:57:30.170273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:30.170281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:30.170293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:57:30.170307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 00:57:30.910482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-05 00:57:30.910577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:30.910589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:30.910597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:57:30.910623 | orchestrator |
2026-01-05 00:57:30.910632 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-01-05 00:57:30.910640 | orchestrator | Monday 05 January 2026 00:57:30 +0000 (0:00:04.538) 0:04:50.994 ********
2026-01-05 00:57:30.910651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-05 00:57:30.910662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 00:57:30.910690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:30.910707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:30.910719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 00:57:30.910734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 00:57:30.910766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-05 00:57:30.910783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:30.910798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:31.035100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-05 00:57:31.035187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:57:31.035196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 00:57:31.035228 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:31.035303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:31.035312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:31.035319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-05 00:57:31.035348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 00:57:31.035356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 00:57:31.035364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 00:57:31.035391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:31.035398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:31.035406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-05 00:57:31.035417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 00:57:33.042176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:33.042333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 00:57:33.042387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:33.042399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-05 00:57:33.042408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:57:33.042415 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:33.042424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:33.042457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 00:57:33.042466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 00:57:33.042522 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:33.042528 | orchestrator |
2026-01-05 00:57:33.042533 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-01-05 00:57:33.042537 | orchestrator | Monday 05 January 2026 00:57:31 +0000 (0:00:00.843) 0:04:51.838 ********
2026-01-05 00:57:33.042542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-05 00:57:33.042558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-05 00:57:33.042565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:57:33.042572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:57:33.042577 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:33.042581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-05 00:57:33.042585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-05 00:57:33.042589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:57:33.042593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:57:33.042597 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:33.042601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-05 00:57:33.042605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-05 00:57:33.042609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:57:33.042620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-05 00:57:40.821717 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:40.821851 | orchestrator |
2026-01-05 00:57:40.821861 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-01-05 00:57:40.821867 | orchestrator | Monday 05 January 2026 00:57:33 +0000 (0:00:01.844) 0:04:53.682 ********
2026-01-05 00:57:40.821871 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:40.821907 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:40.821912 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:40.821916 | orchestrator |
2026-01-05 00:57:40.821921 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-01-05 00:57:40.821925 | orchestrator | Monday 05 January 2026 00:57:33 +0000 (0:00:00.463) 0:04:54.146 ********
2026-01-05 00:57:40.821932 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:40.821938 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:40.821943 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:57:40.821949 | orchestrator |
2026-01-05 00:57:40.821955 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-01-05 00:57:40.821961 | orchestrator | Monday 05 January 2026 00:57:34 +0000 (0:00:01.335) 0:04:55.481 ********
2026-01-05 00:57:40.821966 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:57:40.821972 | orchestrator |
2026-01-05 00:57:40.821979 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-01-05 00:57:40.821985 | orchestrator | Monday 05 January 2026 00:57:36 +0000 (0:00:01.790) 0:04:57.271 ********
2026-01-05 00:57:40.821995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:57:40.822006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:57:40.822074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:57:40.822091 | orchestrator |
2026-01-05 00:57:40.822098 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-01-05 00:57:40.822122 | orchestrator | Monday 05 January 2026 00:57:38 +0000 (0:00:02.231) 0:04:59.503 ********
2026-01-05 00:57:40.822129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:57:40.822135 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:57:40.822141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 00:57:40.822148 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:57:40.822154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 00:57:40.822160 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:40.822166 | orchestrator | 2026-01-05 00:57:40.822172 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-05 00:57:40.822185 | orchestrator | Monday 05 January 2026 00:57:39 +0000 (0:00:00.422) 0:04:59.925 ******** 2026-01-05 00:57:40.822193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-05 00:57:40.822202 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:40.822208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-05 00:57:40.822214 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:40.822307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-05 00:57:40.822325 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:40.822330 | orchestrator | 2026-01-05 00:57:40.822340 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-05 00:57:40.822345 | orchestrator | Monday 05 January 
2026 00:57:40 +0000 (0:00:00.756) 0:05:00.682 ******** 2026-01-05 00:57:40.822357 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:51.129867 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:51.129972 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:51.129982 | orchestrator | 2026-01-05 00:57:51.129990 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-05 00:57:51.129999 | orchestrator | Monday 05 January 2026 00:57:40 +0000 (0:00:00.797) 0:05:01.479 ******** 2026-01-05 00:57:51.130006 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:51.130055 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:51.130063 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:51.130070 | orchestrator | 2026-01-05 00:57:51.130077 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-05 00:57:51.130083 | orchestrator | Monday 05 January 2026 00:57:42 +0000 (0:00:01.202) 0:05:02.682 ******** 2026-01-05 00:57:51.130113 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 00:57:51.130121 | orchestrator | 2026-01-05 00:57:51.130128 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-05 00:57:51.130135 | orchestrator | Monday 05 January 2026 00:57:43 +0000 (0:00:01.436) 0:05:04.118 ******** 2026-01-05 00:57:51.130145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 00:57:51.130155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 00:57:51.130183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 00:57:51.130256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 00:57:51.130267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': 
'30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 00:57:51.130274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 00:57:51.130292 | orchestrator | 2026-01-05 00:57:51.130303 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-05 00:57:51.130314 | orchestrator | Monday 05 January 2026 00:57:49 +0000 (0:00:06.519) 0:05:10.638 ******** 2026-01-05 00:57:51.130324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 00:57:51.130363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 00:57:57.334257 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:57.334346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 00:57:57.334355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 00:57:57.334376 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:57.334381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 00:57:57.334397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 00:57:57.334401 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:57.334405 | orchestrator | 2026-01-05 00:57:57.334452 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-05 
00:57:57.334459 | orchestrator | Monday 05 January 2026 00:57:51 +0000 (0:00:01.142) 0:05:11.781 ******** 2026-01-05 00:57:57.334478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334499 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:57.334503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334520 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334524 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:57.334528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 00:57:57.334544 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:57.334547 | orchestrator | 2026-01-05 00:57:57.334552 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-05 00:57:57.334555 | orchestrator | Monday 05 January 2026 00:57:52 +0000 (0:00:00.975) 0:05:12.756 ******** 2026-01-05 00:57:57.334559 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:57.334563 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:57.334567 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:57.334571 | orchestrator | 2026-01-05 00:57:57.334575 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-05 00:57:57.334584 | orchestrator | Monday 05 January 2026 00:57:53 +0000 (0:00:01.345) 0:05:14.102 ******** 2026-01-05 00:57:57.334594 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:57:57.334598 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:57:57.334602 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:57:57.334605 | orchestrator | 2026-01-05 00:57:57.334609 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-05 00:57:57.334615 | orchestrator | Monday 05 January 2026 00:57:55 +0000 (0:00:02.437) 0:05:16.540 ******** 2026-01-05 00:57:57.334620 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:57.334626 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:57.334632 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:57.334638 | orchestrator | 2026-01-05 00:57:57.334648 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-05 00:57:57.334654 | orchestrator | Monday 05 January 2026 00:57:56 +0000 (0:00:00.672) 0:05:17.213 ******** 2026-01-05 00:57:57.334660 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:57.334665 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:57:57.334671 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:57:57.334676 | orchestrator | 2026-01-05 00:57:57.334682 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-05 00:57:57.334687 | orchestrator | Monday 05 January 2026 00:57:56 +0000 (0:00:00.439) 0:05:17.652 ******** 2026-01-05 00:57:57.334693 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:57:57.334704 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:58:41.582626 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:58:41.582710 | orchestrator | 2026-01-05 00:58:41.582719 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-01-05 00:58:41.582740 | orchestrator | Monday 05 January 2026 00:57:57 +0000 (0:00:00.339) 0:05:17.991 ******** 2026-01-05 00:58:41.582746 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:58:41.582751 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:58:41.582755 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:58:41.582760 | orchestrator | 2026-01-05 00:58:41.582765 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-05 00:58:41.582770 | orchestrator | Monday 05 January 2026 00:57:57 +0000 (0:00:00.345) 0:05:18.336 ******** 2026-01-05 00:58:41.582775 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:58:41.582780 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:58:41.582784 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:58:41.582789 | orchestrator | 2026-01-05 00:58:41.582793 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-05 00:58:41.582798 | orchestrator | Monday 05 January 2026 00:57:58 +0000 (0:00:00.742) 0:05:19.079 ******** 2026-01-05 00:58:41.582803 | orchestrator | skipping: [testbed-node-0] 2026-01-05 00:58:41.582807 | orchestrator | skipping: [testbed-node-1] 2026-01-05 00:58:41.582812 | orchestrator | skipping: [testbed-node-2] 2026-01-05 00:58:41.582816 | orchestrator | 2026-01-05 00:58:41.582821 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-05 00:58:41.582825 | orchestrator | Monday 05 January 2026 00:57:59 +0000 (0:00:00.586) 0:05:19.665 ******** 2026-01-05 00:58:41.582830 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:58:41.582836 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:58:41.582840 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:58:41.582845 | orchestrator | 2026-01-05 00:58:41.582849 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-05 00:58:41.582854 | orchestrator | Monday 05 January 2026 00:57:59 +0000 (0:00:00.767) 0:05:20.432 ******** 2026-01-05 00:58:41.582858 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:58:41.582863 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:58:41.582868 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:58:41.582872 | orchestrator | 2026-01-05 00:58:41.582877 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-05 00:58:41.582881 | orchestrator | Monday 05 January 2026 00:58:00 +0000 (0:00:00.370) 0:05:20.802 ******** 2026-01-05 00:58:41.582886 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:58:41.582890 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:58:41.582895 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:58:41.582899 | orchestrator | 2026-01-05 00:58:41.582904 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-05 00:58:41.582908 | orchestrator | Monday 05 January 2026 00:58:01 +0000 (0:00:01.406) 0:05:22.209 ******** 2026-01-05 00:58:41.582913 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:58:41.582917 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:58:41.582922 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:58:41.582926 | orchestrator | 2026-01-05 00:58:41.582931 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-05 00:58:41.582936 | orchestrator | Monday 05 January 2026 00:58:02 +0000 (0:00:00.932) 0:05:23.142 ******** 2026-01-05 00:58:41.582940 | orchestrator | ok: [testbed-node-0] 2026-01-05 00:58:41.582945 | orchestrator | ok: [testbed-node-1] 2026-01-05 00:58:41.582949 | orchestrator | ok: [testbed-node-2] 2026-01-05 00:58:41.582954 | orchestrator | 2026-01-05 00:58:41.582959 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
****************
2026-01-05 00:58:41.582964 | orchestrator | Monday 05 January 2026 00:58:03 +0000 (0:00:00.922) 0:05:24.064 ********
2026-01-05 00:58:41.582968 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:58:41.582973 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:58:41.582977 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:58:41.582982 | orchestrator |
2026-01-05 00:58:41.582986 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-05 00:58:41.582991 | orchestrator | Monday 05 January 2026 00:58:12 +0000 (0:00:09.447) 0:05:33.512 ********
2026-01-05 00:58:41.582996 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:58:41.583007 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:58:41.583011 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:58:41.583016 | orchestrator |
2026-01-05 00:58:41.583021 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-05 00:58:41.583025 | orchestrator | Monday 05 January 2026 00:58:14 +0000 (0:00:01.257) 0:05:34.769 ********
2026-01-05 00:58:41.583030 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:58:41.583034 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:58:41.583039 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:58:41.583044 | orchestrator |
2026-01-05 00:58:41.583048 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-05 00:58:41.583053 | orchestrator | Monday 05 January 2026 00:58:27 +0000 (0:00:12.941) 0:05:47.711 ********
2026-01-05 00:58:41.583057 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:58:41.583062 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:58:41.583066 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:58:41.583071 | orchestrator |
2026-01-05 00:58:41.583075 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-05 00:58:41.583080 | orchestrator | Monday 05 January 2026 00:58:27 +0000 (0:00:00.827) 0:05:48.539 ********
2026-01-05 00:58:41.583084 | orchestrator | changed: [testbed-node-2]
2026-01-05 00:58:41.583089 | orchestrator | changed: [testbed-node-1]
2026-01-05 00:58:41.583138 | orchestrator | changed: [testbed-node-0]
2026-01-05 00:58:41.583148 | orchestrator |
2026-01-05 00:58:41.583156 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-05 00:58:41.583177 | orchestrator | Monday 05 January 2026 00:58:35 +0000 (0:00:08.058) 0:05:56.597 ********
2026-01-05 00:58:41.583186 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:41.583192 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:58:41.583197 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:58:41.583203 | orchestrator |
2026-01-05 00:58:41.583209 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-05 00:58:41.583214 | orchestrator | Monday 05 January 2026 00:58:36 +0000 (0:00:00.736) 0:05:57.334 ********
2026-01-05 00:58:41.583220 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:41.583225 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:58:41.583230 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:58:41.583238 | orchestrator |
2026-01-05 00:58:41.583263 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-05 00:58:41.583272 | orchestrator | Monday 05 January 2026 00:58:37 +0000 (0:00:00.382) 0:05:57.717 ********
2026-01-05 00:58:41.583279 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:41.583287 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:58:41.583294 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:58:41.583301 | orchestrator |
2026-01-05 00:58:41.583310 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-05 00:58:41.583317 | orchestrator | Monday 05 January 2026 00:58:37 +0000 (0:00:00.378) 0:05:58.095 ********
2026-01-05 00:58:41.583326 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:41.583334 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:58:41.583341 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:58:41.583349 | orchestrator |
2026-01-05 00:58:41.583357 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-05 00:58:41.583363 | orchestrator | Monday 05 January 2026 00:58:37 +0000 (0:00:00.347) 0:05:58.443 ********
2026-01-05 00:58:41.583369 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:41.583375 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:58:41.583381 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:58:41.583387 | orchestrator |
2026-01-05 00:58:41.583393 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-05 00:58:41.583399 | orchestrator | Monday 05 January 2026 00:58:38 +0000 (0:00:00.701) 0:05:59.144 ********
2026-01-05 00:58:41.583406 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:41.583412 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:58:41.583425 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:58:41.583430 | orchestrator |
2026-01-05 00:58:41.583436 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-05 00:58:41.583442 | orchestrator | Monday 05 January 2026 00:58:38 +0000 (0:00:00.383) 0:05:59.528 ********
2026-01-05 00:58:41.583448 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:58:41.583454 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:58:41.583460 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:58:41.583466 | orchestrator |
2026-01-05 00:58:41.583472 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-05 00:58:41.583478 | orchestrator | Monday 05 January 2026 00:58:39 +0000 (0:00:00.946) 0:06:00.475 ********
2026-01-05 00:58:41.583484 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:58:41.583490 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:58:41.583496 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:58:41.583502 | orchestrator |
2026-01-05 00:58:41.583508 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 00:58:41.583516 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-05 00:58:41.583524 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-05 00:58:41.583530 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-05 00:58:41.583536 | orchestrator |
2026-01-05 00:58:41.583542 | orchestrator |
2026-01-05 00:58:41.583548 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 00:58:41.583554 | orchestrator | Monday 05 January 2026 00:58:40 +0000 (0:00:00.877) 0:06:01.352 ********
2026-01-05 00:58:41.583559 | orchestrator | ===============================================================================
2026-01-05 00:58:41.583564 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.94s
2026-01-05 00:58:41.583569 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.45s
2026-01-05 00:58:41.583574 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.06s
2026-01-05 00:58:41.583579 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.52s
2026-01-05 00:58:41.583584 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.54s
2026-01-05 00:58:41.583589 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.54s
2026-01-05 00:58:41.583594 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.33s
2026-01-05 00:58:41.583600 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.30s
2026-01-05 00:58:41.583605 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.25s
2026-01-05 00:58:41.583610 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.22s
2026-01-05 00:58:41.583615 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.78s
2026-01-05 00:58:41.583620 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.65s
2026-01-05 00:58:41.583625 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.60s
2026-01-05 00:58:41.583630 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.59s
2026-01-05 00:58:41.583635 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.59s
2026-01-05 00:58:41.583644 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.52s
2026-01-05 00:58:41.583650 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.51s
2026-01-05 00:58:41.583655 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.45s
2026-01-05 00:58:41.583660 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.44s
2026-01-05 00:58:41.583670 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.43s
2026-01-05 00:58:44.005606 | orchestrator | 2026-01-05 00:58:44 | INFO  | Task 25ab3d60-b255-4432-992d-12ba95a2776b (opensearch) was prepared for execution.
2026-01-05 00:58:44.005860 | orchestrator | 2026-01-05 00:58:44 | INFO  | It takes a moment until task 25ab3d60-b255-4432-992d-12ba95a2776b (opensearch) has been started and output is visible here.
2026-01-05 00:58:56.024304 | orchestrator |
2026-01-05 00:58:56.024445 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 00:58:56.024465 | orchestrator |
2026-01-05 00:58:56.024480 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 00:58:56.024494 | orchestrator | Monday 05 January 2026 00:58:48 +0000 (0:00:00.266) 0:00:00.266 ********
2026-01-05 00:58:56.024508 | orchestrator | ok: [testbed-node-0]
2026-01-05 00:58:56.024523 | orchestrator | ok: [testbed-node-1]
2026-01-05 00:58:56.024539 | orchestrator | ok: [testbed-node-2]
2026-01-05 00:58:56.024548 | orchestrator |
2026-01-05 00:58:56.024556 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 00:58:56.024564 | orchestrator | Monday 05 January 2026 00:58:48 +0000 (0:00:00.316) 0:00:00.582 ********
2026-01-05 00:58:56.024574 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-01-05 00:58:56.024582 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-01-05 00:58:56.024591 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-01-05 00:58:56.024599 | orchestrator |
2026-01-05 00:58:56.024607 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-01-05 00:58:56.024615 | orchestrator |
2026-01-05 00:58:56.024629 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-05 00:58:56.024643 | orchestrator | Monday 05 January 2026 00:58:49 +0000 (0:00:00.455) 0:00:01.038 ********
2026-01-05 00:58:56.024657 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:58:56.024670 | orchestrator |
2026-01-05 00:58:56.024683 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-01-05 00:58:56.024697 | orchestrator | Monday 05 January 2026 00:58:49 +0000 (0:00:00.507) 0:00:01.546 ********
2026-01-05 00:58:56.024711 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:58:56.024725 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:58:56.024739 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 00:58:56.024752 | orchestrator |
2026-01-05 00:58:56.024764 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-01-05 00:58:56.024778 | orchestrator | Monday 05 January 2026 00:58:51 +0000 (0:00:01.726) 0:00:03.272 ********
2026-01-05 00:58:56.024797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:56.024817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:56.024900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:56.024921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:56.024938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:56.024955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:56.024981 | orchestrator |
2026-01-05 00:58:56.024995 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-05 00:58:56.025016 | orchestrator | Monday 05 January 2026 00:58:52 +0000 (0:00:01.702) 0:00:04.974 ********
2026-01-05 00:58:56.025030 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 00:58:56.025044 | orchestrator |
2026-01-05 00:58:56.025058 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-01-05 00:58:56.025098 | orchestrator | Monday 05 January 2026 00:58:53 +0000 (0:00:00.542) 0:00:05.516 ********
2026-01-05 00:58:56.025189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:56.838421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:56.838526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:56.838539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:56.838637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:56.838667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:56.838675 | orchestrator |
2026-01-05 00:58:56.838684 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-01-05 00:58:56.838691 | orchestrator | Monday 05 January 2026 00:58:56 +0000 (0:00:02.518) 0:00:08.034 ********
2026-01-05 00:58:56.838699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:56.838717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:56.838724 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:56.838732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:56.838747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:57.916013 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:58:57.916147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:57.916193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:57.916208 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:58:57.916220 | orchestrator |
2026-01-05 00:58:57.916231 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-01-05 00:58:57.916257 | orchestrator | Monday 05 January 2026 00:58:56 +0000 (0:00:00.814) 0:00:08.848 ********
2026-01-05 00:58:57.916267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:57.916279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:57.916305 | orchestrator | skipping: [testbed-node-0]
2026-01-05 00:58:57.916316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:57.916335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:57.916346 | orchestrator | skipping: [testbed-node-1]
2026-01-05 00:58:57.916361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:58:57.916372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 00:58:57.916382 | orchestrator | skipping: [testbed-node-2]
2026-01-05 00:58:57.916421 | orchestrator |
2026-01-05 00:58:57.916432 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-01-05 00:58:57.916450 | orchestrator | Monday 05 January 2026 00:58:57 +0000 (0:00:01.071) 0:00:09.920 ********
2026-01-05 00:59:06.521081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:59:06.522360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:59:06.522415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 00:59:06.522424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130',
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:59:06.522450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:59:06.522462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 00:59:06.522467 | orchestrator | 2026-01-05 00:59:06.522474 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-05 00:59:06.522483 | orchestrator | Monday 05 January 2026 00:59:00 +0000 (0:00:02.394) 0:00:12.315 ******** 2026-01-05 00:59:06.522490 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:06.522497 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:06.522505 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:06.522510 | orchestrator | 2026-01-05 00:59:06.522514 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-05 00:59:06.522522 | orchestrator | Monday 05 January 2026 00:59:02 +0000 (0:00:02.400) 0:00:14.716 ******** 2026-01-05 00:59:06.522526 | orchestrator | changed: [testbed-node-0] 2026-01-05 00:59:06.522531 | orchestrator | changed: [testbed-node-1] 2026-01-05 00:59:06.522535 | orchestrator | changed: [testbed-node-2] 2026-01-05 00:59:06.522539 | 
orchestrator | 2026-01-05 00:59:06.522543 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-05 00:59:06.522550 | orchestrator | Monday 05 January 2026 00:59:04 +0000 (0:00:01.844) 0:00:16.560 ******** 2026-01-05 00:59:06.522556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 00:59:06.522564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-01-05 00:59:06.522582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 01:01:18.857866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-01-05 01:01:18.857982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:18.857996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 01:01:18.858069 | orchestrator | 2026-01-05 01:01:18.858081 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 01:01:18.858090 | orchestrator | Monday 05 January 2026 00:59:06 +0000 (0:00:01.973) 0:00:18.533 ******** 2026-01-05 01:01:18.858098 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:18.858106 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:01:18.858216 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:01:18.858225 | orchestrator | 2026-01-05 01:01:18.858233 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 01:01:18.858240 | orchestrator | Monday 05 January 2026 00:59:06 +0000 (0:00:00.289) 0:00:18.823 ******** 2026-01-05 01:01:18.858247 | orchestrator | 2026-01-05 01:01:18.858255 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 01:01:18.858262 | orchestrator | Monday 05 January 2026 00:59:06 +0000 (0:00:00.077) 0:00:18.900 ******** 2026-01-05 01:01:18.858269 | orchestrator | 2026-01-05 01:01:18.858276 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 01:01:18.858284 | orchestrator | Monday 05 January 2026 00:59:06 +0000 (0:00:00.067) 0:00:18.967 ******** 2026-01-05 01:01:18.858291 | orchestrator | 2026-01-05 01:01:18.858298 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-05 01:01:18.858321 | orchestrator | Monday 05 January 2026 00:59:07 +0000 (0:00:00.080) 0:00:19.048 ******** 2026-01-05 01:01:18.858329 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:18.858337 | orchestrator | 
2026-01-05 01:01:18.858344 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-05 01:01:18.858353 | orchestrator | Monday 05 January 2026 00:59:07 +0000 (0:00:00.229) 0:00:19.277 ******** 2026-01-05 01:01:18.858362 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:01:18.858370 | orchestrator | 2026-01-05 01:01:18.858379 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-05 01:01:18.858391 | orchestrator | Monday 05 January 2026 00:59:07 +0000 (0:00:00.696) 0:00:19.974 ******** 2026-01-05 01:01:18.858404 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:18.858416 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:18.858428 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:18.858439 | orchestrator | 2026-01-05 01:01:18.858450 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-05 01:01:18.858462 | orchestrator | Monday 05 January 2026 01:00:00 +0000 (0:00:52.249) 0:01:12.224 ******** 2026-01-05 01:01:18.858474 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:18.858486 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:18.858497 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:18.858509 | orchestrator | 2026-01-05 01:01:18.858522 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 01:01:18.858535 | orchestrator | Monday 05 January 2026 01:01:07 +0000 (0:01:07.311) 0:02:19.536 ******** 2026-01-05 01:01:18.858548 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:18.858556 | orchestrator | 2026-01-05 01:01:18.858564 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-05 01:01:18.858578 | orchestrator | Monday 05 January 2026 01:01:08 +0000 
(0:00:00.528) 0:02:20.064 ******** 2026-01-05 01:01:18.858586 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:18.858603 | orchestrator | 2026-01-05 01:01:18.858610 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-05 01:01:18.858617 | orchestrator | Monday 05 January 2026 01:01:10 +0000 (0:00:02.906) 0:02:22.970 ******** 2026-01-05 01:01:18.858625 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:18.858632 | orchestrator | 2026-01-05 01:01:18.858639 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-05 01:01:18.858646 | orchestrator | Monday 05 January 2026 01:01:13 +0000 (0:00:02.376) 0:02:25.347 ******** 2026-01-05 01:01:18.858653 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:18.858660 | orchestrator | 2026-01-05 01:01:18.858668 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-05 01:01:18.858675 | orchestrator | Monday 05 January 2026 01:01:16 +0000 (0:00:02.808) 0:02:28.156 ******** 2026-01-05 01:01:18.858682 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:18.858689 | orchestrator | 2026-01-05 01:01:18.858696 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:01:18.858705 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 01:01:18.858715 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 01:01:18.858722 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 01:01:18.858729 | orchestrator | 2026-01-05 01:01:18.858736 | orchestrator | 2026-01-05 01:01:18.858744 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:01:18.858756 | orchestrator | Monday 05 
January 2026 01:01:18 +0000 (0:00:02.691) 0:02:30.848 ******** 2026-01-05 01:01:18.858767 | orchestrator | =============================================================================== 2026-01-05 01:01:18.858784 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 67.31s 2026-01-05 01:01:18.858801 | orchestrator | opensearch : Restart opensearch container ------------------------------ 52.25s 2026-01-05 01:01:18.858871 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.91s 2026-01-05 01:01:18.858893 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.81s 2026-01-05 01:01:18.858906 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.69s 2026-01-05 01:01:18.858918 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.52s 2026-01-05 01:01:18.858931 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.40s 2026-01-05 01:01:18.858939 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.39s 2026-01-05 01:01:18.858947 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.38s 2026-01-05 01:01:18.858955 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.97s 2026-01-05 01:01:18.858962 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.84s 2026-01-05 01:01:18.858969 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.73s 2026-01-05 01:01:18.858976 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.70s 2026-01-05 01:01:18.858983 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.07s 2026-01-05 01:01:18.858991 | orchestrator | service-cert-copy : opensearch 
| Copying over backend internal TLS certificate --- 0.81s 2026-01-05 01:01:18.858998 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.70s 2026-01-05 01:01:18.859015 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-01-05 01:01:19.103118 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-01-05 01:01:19.103213 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-01-05 01:01:19.103253 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-01-05 01:01:21.251459 | orchestrator | 2026-01-05 01:01:21 | INFO  | Task 1c29bc8c-f5c5-41e3-bc28-966c964a7425 (memcached) was prepared for execution. 2026-01-05 01:01:21.251578 | orchestrator | 2026-01-05 01:01:21 | INFO  | It takes a moment until task 1c29bc8c-f5c5-41e3-bc28-966c964a7425 (memcached) has been started and output is visible here. 
2026-01-05 01:01:37.609315 | orchestrator | 2026-01-05 01:01:37.609415 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:01:37.609424 | orchestrator | 2026-01-05 01:01:37.609431 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:01:37.609438 | orchestrator | Monday 05 January 2026 01:01:25 +0000 (0:00:00.249) 0:00:00.249 ******** 2026-01-05 01:01:37.609444 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:37.609452 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:37.609458 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:37.609464 | orchestrator | 2026-01-05 01:01:37.609470 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:01:37.609477 | orchestrator | Monday 05 January 2026 01:01:25 +0000 (0:00:00.302) 0:00:00.551 ******** 2026-01-05 01:01:37.609484 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-05 01:01:37.609494 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-05 01:01:37.609504 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-05 01:01:37.609514 | orchestrator | 2026-01-05 01:01:37.609543 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-05 01:01:37.609554 | orchestrator | 2026-01-05 01:01:37.609564 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-05 01:01:37.609588 | orchestrator | Monday 05 January 2026 01:01:25 +0000 (0:00:00.375) 0:00:00.926 ******** 2026-01-05 01:01:37.609606 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:01:37.611181 | orchestrator | 2026-01-05 01:01:37.611219 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-01-05 01:01:37.611228 | orchestrator | Monday 05 January 2026 01:01:26 +0000 (0:00:00.450) 0:00:01.377 ******** 2026-01-05 01:01:37.611235 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-05 01:01:37.611241 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-05 01:01:37.611247 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-05 01:01:37.611253 | orchestrator | 2026-01-05 01:01:37.611260 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-05 01:01:37.611269 | orchestrator | Monday 05 January 2026 01:01:26 +0000 (0:00:00.680) 0:00:02.058 ******** 2026-01-05 01:01:37.611279 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-05 01:01:37.611288 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-05 01:01:37.611296 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-05 01:01:37.611304 | orchestrator | 2026-01-05 01:01:37.611313 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-01-05 01:01:37.611321 | orchestrator | Monday 05 January 2026 01:01:28 +0000 (0:00:01.578) 0:00:03.636 ******** 2026-01-05 01:01:37.611329 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:37.611339 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:37.611347 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:37.611360 | orchestrator | 2026-01-05 01:01:37.611374 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-05 01:01:37.611383 | orchestrator | Monday 05 January 2026 01:01:30 +0000 (0:00:01.537) 0:00:05.173 ******** 2026-01-05 01:01:37.611394 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:01:37.611403 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:01:37.611413 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:01:37.611421 | orchestrator | 2026-01-05 
01:01:37.611458 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:01:37.611468 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:37.611479 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:37.611489 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:01:37.611498 | orchestrator | 2026-01-05 01:01:37.611507 | orchestrator | 2026-01-05 01:01:37.611517 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:01:37.611526 | orchestrator | Monday 05 January 2026 01:01:37 +0000 (0:00:07.079) 0:00:12.252 ******** 2026-01-05 01:01:37.611536 | orchestrator | =============================================================================== 2026-01-05 01:01:37.611545 | orchestrator | memcached : Restart memcached container --------------------------------- 7.08s 2026-01-05 01:01:37.611553 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.58s 2026-01-05 01:01:37.611563 | orchestrator | memcached : Check memcached container ----------------------------------- 1.54s 2026-01-05 01:01:37.611573 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.68s 2026-01-05 01:01:37.611583 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.45s 2026-01-05 01:01:37.611593 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s 2026-01-05 01:01:37.611602 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-01-05 01:01:40.243395 | orchestrator | 2026-01-05 01:01:40 | INFO  | Task c85278b9-00cc-4a61-a54c-46dfa2bd995d (redis) was prepared for execution. 
2026-01-05 01:01:40.243489 | orchestrator | 2026-01-05 01:01:40 | INFO  | It takes a moment until task c85278b9-00cc-4a61-a54c-46dfa2bd995d (redis) has been started and output is visible here. 2026-01-05 01:01:49.326180 | orchestrator | 2026-01-05 01:01:49.326266 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:01:49.326273 | orchestrator | 2026-01-05 01:01:49.326278 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:01:49.326282 | orchestrator | Monday 05 January 2026 01:01:44 +0000 (0:00:00.260) 0:00:00.260 ******** 2026-01-05 01:01:49.326286 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:01:49.326301 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:01:49.326306 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:01:49.326310 | orchestrator | 2026-01-05 01:01:49.326314 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:01:49.326318 | orchestrator | Monday 05 January 2026 01:01:44 +0000 (0:00:00.308) 0:00:00.568 ******** 2026-01-05 01:01:49.326323 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-05 01:01:49.326327 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-05 01:01:49.326331 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-05 01:01:49.326335 | orchestrator | 2026-01-05 01:01:49.326339 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-05 01:01:49.326343 | orchestrator | 2026-01-05 01:01:49.326347 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-05 01:01:49.326351 | orchestrator | Monday 05 January 2026 01:01:45 +0000 (0:00:00.462) 0:00:01.031 ******** 2026-01-05 01:01:49.326355 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-01-05 01:01:49.326361 | orchestrator | 2026-01-05 01:01:49.326365 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-05 01:01:49.326369 | orchestrator | Monday 05 January 2026 01:01:45 +0000 (0:00:00.479) 0:00:01.511 ******** 2026-01-05 01:01:49.326376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326563 | orchestrator | 2026-01-05 01:01:49.326568 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-05 01:01:49.326581 | orchestrator | Monday 05 January 2026 01:01:46 +0000 (0:00:01.164) 0:00:02.675 ******** 2026-01-05 01:01:49.326587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:49.326618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746343 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746470 | orchestrator | 2026-01-05 01:01:53.746494 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-05 01:01:53.746501 | orchestrator | Monday 05 January 2026 01:01:49 +0000 (0:00:02.348) 0:00:05.024 ******** 2026-01-05 01:01:53.746509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746559 | orchestrator | 2026-01-05 01:01:53.746564 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-01-05 01:01:53.746569 | orchestrator | Monday 05 January 2026 01:01:51 +0000 (0:00:02.598) 0:00:07.623 ******** 2026-01-05 01:01:53.746578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:01:53.746609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-05 01:02:08.553838 | orchestrator | 2026-01-05 01:02:08.553949 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 01:02:08.553960 | orchestrator | Monday 05 January 2026 01:01:53 +0000 (0:00:01.576) 0:00:09.200 ******** 2026-01-05 01:02:08.553967 | orchestrator | 2026-01-05 01:02:08.553974 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-05 01:02:08.553998 | orchestrator | Monday 05 January 2026 01:01:53 +0000 (0:00:00.063) 0:00:09.263 ******** 2026-01-05 01:02:08.554005 | orchestrator | 2026-01-05 01:02:08.554062 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-01-05 01:02:08.554069 | orchestrator | Monday 05 January 2026 01:01:53 +0000 (0:00:00.096) 0:00:09.360 ******** 2026-01-05 01:02:08.554075 | orchestrator | 2026-01-05 01:02:08.554081 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-05 01:02:08.554087 | orchestrator | Monday 05 January 2026 01:01:53 +0000 (0:00:00.081) 0:00:09.442 ******** 2026-01-05 01:02:08.554094 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:02:08.554101 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:02:08.554107 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:08.554114 | orchestrator | 2026-01-05 01:02:08.554120 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-05 01:02:08.554126 | orchestrator | Monday 05 January 2026 01:02:00 +0000 (0:00:06.669) 0:00:16.112 ******** 2026-01-05 01:02:08.554132 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:08.554138 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:02:08.554144 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:02:08.554151 | orchestrator | 2026-01-05 01:02:08.554157 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:02:08.554163 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:02:08.554172 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:02:08.554178 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:02:08.554184 | orchestrator | 2026-01-05 01:02:08.554190 | orchestrator | 2026-01-05 01:02:08.554196 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:02:08.554202 | orchestrator | Monday 05 January 
2026 01:02:08 +0000 (0:00:07.919) 0:00:24.031 ******** 2026-01-05 01:02:08.554209 | orchestrator | =============================================================================== 2026-01-05 01:02:08.554215 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.92s 2026-01-05 01:02:08.554221 | orchestrator | redis : Restart redis container ----------------------------------------- 6.67s 2026-01-05 01:02:08.554228 | orchestrator | redis : Copying over redis config files --------------------------------- 2.60s 2026-01-05 01:02:08.554234 | orchestrator | redis : Copying over default config.json files -------------------------- 2.35s 2026-01-05 01:02:08.554241 | orchestrator | redis : Check redis containers ------------------------------------------ 1.58s 2026-01-05 01:02:08.554247 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.16s 2026-01-05 01:02:08.554254 | orchestrator | redis : include_tasks --------------------------------------------------- 0.48s 2026-01-05 01:02:08.554260 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-01-05 01:02:08.554266 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-01-05 01:02:08.554273 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2026-01-05 01:02:10.606457 | orchestrator | 2026-01-05 01:02:10 | INFO  | Task 4833ecc2-6450-48e3-80a0-1b5893f5d655 (mariadb) was prepared for execution. 2026-01-05 01:02:10.606557 | orchestrator | 2026-01-05 01:02:10 | INFO  | It takes a moment until task 4833ecc2-6450-48e3-80a0-1b5893f5d655 (mariadb) has been started and output is visible here. 
2026-01-05 01:02:22.944191 | orchestrator | 2026-01-05 01:02:22.944314 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:02:22.944328 | orchestrator | 2026-01-05 01:02:22.944336 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:02:22.944344 | orchestrator | Monday 05 January 2026 01:02:14 +0000 (0:00:00.152) 0:00:00.152 ******** 2026-01-05 01:02:22.944351 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:02:22.944360 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:02:22.944367 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:02:22.944374 | orchestrator | 2026-01-05 01:02:22.944381 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:02:22.944388 | orchestrator | Monday 05 January 2026 01:02:14 +0000 (0:00:00.265) 0:00:00.418 ******** 2026-01-05 01:02:22.944395 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-05 01:02:22.944402 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-05 01:02:22.944408 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-05 01:02:22.944415 | orchestrator | 2026-01-05 01:02:22.944422 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-05 01:02:22.944429 | orchestrator | 2026-01-05 01:02:22.944435 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-05 01:02:22.944442 | orchestrator | Monday 05 January 2026 01:02:15 +0000 (0:00:00.448) 0:00:00.866 ******** 2026-01-05 01:02:22.944449 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 01:02:22.944456 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-05 01:02:22.944463 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-05 01:02:22.944470 | orchestrator | 
2026-01-05 01:02:22.944477 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 01:02:22.944483 | orchestrator | Monday 05 January 2026 01:02:15 +0000 (0:00:00.361) 0:00:01.227 ******** 2026-01-05 01:02:22.944491 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:02:22.944499 | orchestrator | 2026-01-05 01:02:22.944521 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-05 01:02:22.944529 | orchestrator | Monday 05 January 2026 01:02:15 +0000 (0:00:00.458) 0:00:01.686 ******** 2026-01-05 01:02:22.944541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:22.944587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:22.944601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:22.944615 | orchestrator | 2026-01-05 01:02:22.944622 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-05 01:02:22.944629 | orchestrator | Monday 05 January 2026 01:02:18 +0000 (0:00:02.289) 0:00:03.975 ******** 2026-01-05 01:02:22.944637 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:22.944649 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:22.944660 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:22.944672 | orchestrator | 2026-01-05 01:02:22.944683 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-05 01:02:22.944695 | orchestrator | Monday 05 January 2026 01:02:18 +0000 (0:00:00.542) 0:00:04.518 ******** 2026-01-05 01:02:22.944706 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:22.944830 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:22.944851 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:02:22.944864 | orchestrator | 2026-01-05 01:02:22.944875 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-05 01:02:22.944887 | orchestrator | Monday 05 January 2026 01:02:20 +0000 (0:00:01.368) 0:00:05.886 ******** 2026-01-05 01:02:22.944912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:30.903032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:30.903199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 01:02:30.903222 | orchestrator |
2026-01-05 01:02:30.903241 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-01-05 01:02:30.903256 | orchestrator | Monday 05 January 2026 01:02:22 +0000 (0:00:02.782) 0:00:08.669 ********
2026-01-05 01:02:30.903270 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:02:30.903285 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:02:30.903307 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:02:30.903322 | orchestrator |
2026-01-05 01:02:30.903336 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-01-05 01:02:30.903371 | orchestrator | Monday 05 January 2026 01:02:24 +0000 (0:00:01.197) 0:00:09.866 ********
2026-01-05 01:02:30.903395 |
orchestrator | changed: [testbed-node-0]
2026-01-05 01:02:30.903409 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:02:30.903423 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:02:30.903436 | orchestrator |
2026-01-05 01:02:30.903449 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-05 01:02:30.903463 | orchestrator | Monday 05 January 2026 01:02:27 +0000 (0:00:03.822) 0:00:13.688 ********
2026-01-05 01:02:30.903478 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:02:30.903506 | orchestrator |
2026-01-05 01:02:30.903520 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-01-05 01:02:30.903534 | orchestrator | Monday 05 January 2026 01:02:28 +0000 (0:00:00.526) 0:00:14.215 ********
2026-01-05 01:02:30.903550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:30.903565 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:02:30.903599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:35.687415 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:35.687551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:35.687565 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:35.687574 | orchestrator | 2026-01-05 01:02:35.687585 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-05 01:02:35.687595 | orchestrator | Monday 05 January 2026 01:02:30 +0000 (0:00:02.415) 0:00:16.631 ******** 2026-01-05 01:02:35.687604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:35.687631 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:02:35.687659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:35.687669 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:35.687676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:35.687684 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:35.687691 | orchestrator | 2026-01-05 01:02:35.687771 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-05 01:02:35.687779 | orchestrator | Monday 05 January 2026 01:02:33 +0000 (0:00:02.456) 0:00:19.087 ******** 2026-01-05 01:02:35.687803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:38.531242 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:02:38.531362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:38.531381 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:02:38.531411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 01:02:38.531450 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:02:38.531461 | orchestrator | 2026-01-05 01:02:38.531474 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-05 01:02:38.531487 | orchestrator | Monday 05 January 2026 01:02:35 +0000 (0:00:02.326) 0:00:21.414 ******** 2026-01-05 01:02:38.531518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:38.531536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 01:02:38.531564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 01:04:48.325537 | orchestrator |
2026-01-05 01:04:48.325702 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-01-05 01:04:48.325729 | orchestrator | Monday 05 January 2026 01:02:38 +0000 (0:00:02.847) 0:00:24.261 ********
2026-01-05 01:04:48.325739 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:04:48.325754 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:04:48.325768 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:04:48.325780 | orchestrator |
2026-01-05 01:04:48.325795 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-01-05 01:04:48.325835 | orchestrator | Monday 05 January 2026 01:02:39 +0000 (0:00:00.863) 0:00:25.124 ********
2026-01-05 01:04:48.325850 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:04:48.325865 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:04:48.325879 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:04:48.325892 | orchestrator |
2026-01-05 01:04:48.325904 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-01-05 01:04:48.325913 | orchestrator | Monday 05 January 2026 01:02:39 +0000 (0:00:00.540) 0:00:25.665 ********
2026-01-05 01:04:48.325921 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:04:48.325929 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:04:48.325937 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:04:48.325944 | orchestrator |
2026-01-05 01:04:48.325953 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-01-05 01:04:48.325961 | orchestrator | Monday 05 January 2026 01:02:40 +0000 (0:00:00.344) 0:00:26.010 ********
2026-01-05 01:04:48.325971 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-01-05 01:04:48.325980 | orchestrator | ...ignoring
2026-01-05 01:04:48.326003 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-01-05 01:04:48.326077 | orchestrator | ...ignoring
2026-01-05 01:04:48.326090 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-01-05 01:04:48.326100 | orchestrator | ...ignoring
2026-01-05 01:04:48.326109 | orchestrator |
2026-01-05 01:04:48.326118 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-01-05 01:04:48.326128 | orchestrator | Monday 05 January 2026 01:02:51 +0000 (0:00:10.913) 0:00:36.923 ********
2026-01-05 01:04:48.326138 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:04:48.326147 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:04:48.326156 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:04:48.326165 | orchestrator |
2026-01-05 01:04:48.326174 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-01-05 01:04:48.326184 | orchestrator | Monday 05 January 2026 01:02:51 +0000 (0:00:00.456) 0:00:37.379 ********
2026-01-05 01:04:48.326194 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:48.326204 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:48.326214 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:48.326222 | orchestrator |
2026-01-05 01:04:48.326230 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-01-05 01:04:48.326238 | orchestrator | Monday 05 January 2026 01:02:52 +0000 (0:00:00.670) 0:00:38.050 ********
2026-01-05 01:04:48.326246 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:48.326254 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:48.326262 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:48.326270 | orchestrator |
2026-01-05 01:04:48.326277 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-01-05 01:04:48.326285 | orchestrator | Monday 05 January 2026 01:02:52 +0000 (0:00:00.429) 0:00:38.479 ********
2026-01-05 01:04:48.326293 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:48.326301 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:48.326309 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:48.326317 | orchestrator |
2026-01-05 01:04:48.326324 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-01-05 01:04:48.326332 | orchestrator | Monday 05 January 2026 01:02:53 +0000 (0:00:00.475) 0:00:38.955 ********
2026-01-05 01:04:48.326340 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:04:48.326348 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:04:48.326356 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:04:48.326364 | orchestrator |
2026-01-05 01:04:48.326371 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-01-05 01:04:48.326392 | orchestrator | Monday 05 January 2026 01:02:53 +0000 (0:00:00.423) 0:00:39.379 ********
2026-01-05 01:04:48.326400 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:48.326409 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:48.326417 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:48.326425 | orchestrator |
2026-01-05 01:04:48.326432 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-05 01:04:48.326440 | orchestrator | Monday 05 January 2026 01:02:54 +0000 (0:00:00.709) 0:00:40.088 ********
2026-01-05 01:04:48.326448 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:48.326456 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:48.326464 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-01-05 01:04:48.326491 | orchestrator |
2026-01-05 01:04:48.326499 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-01-05 01:04:48.326507 | orchestrator | Monday 05 January 2026 01:02:54 +0000 (0:00:00.421) 0:00:40.510 ********
2026-01-05 01:04:48.326515 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:04:48.326523 | orchestrator |
2026-01-05 01:04:48.326531 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-01-05 01:04:48.326539 | orchestrator | Monday 05 January 2026 01:03:05 +0000 (0:00:10.613) 0:00:51.123 ********
2026-01-05 01:04:48.326547 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:04:48.326555 | orchestrator |
2026-01-05 01:04:48.326562 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-05 01:04:48.326570 | orchestrator | Monday 05 January 2026 01:03:05 +0000 (0:00:00.156) 0:00:51.280 ********
2026-01-05 01:04:48.326579 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:48.326607 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:48.326615 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:48.326623 | orchestrator |
2026-01-05 01:04:48.326631 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-01-05 01:04:48.326639 | orchestrator | Monday 05 January 2026 01:03:06 +0000 (0:00:01.030) 0:00:52.311 ********
2026-01-05 01:04:48.326647 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:04:48.326655 | orchestrator |
2026-01-05 01:04:48.326663 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-01-05 01:04:48.326671 | orchestrator | Monday 05 January 2026 01:03:14 +0000 (0:00:08.139) 0:01:00.450 ********
2026-01-05 01:04:48.326678 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:04:48.326686 | orchestrator |
2026-01-05 01:04:48.326694 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-01-05 01:04:48.326702 | orchestrator | Monday 05 January 2026 01:03:16 +0000 (0:00:01.606) 0:01:02.057 ********
2026-01-05 01:04:48.326710 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:04:48.326718 | orchestrator |
2026-01-05 01:04:48.326726 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-01-05 01:04:48.326734 | orchestrator | Monday 05 January 2026 01:03:18 +0000 (0:00:02.565) 0:01:04.623 ********
2026-01-05 01:04:48.326741 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:04:48.326749 | orchestrator |
2026-01-05 01:04:48.326759 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-01-05 01:04:48.326772 | orchestrator | Monday 05 January 2026 01:03:19 +0000 (0:00:00.122) 0:01:04.746 ********
2026-01-05 01:04:48.326785 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:48.326797 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:04:48.326810 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:04:48.326823 | orchestrator |
2026-01-05 01:04:48.326843 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-01-05 01:04:48.326852 | orchestrator | Monday 05 January 2026 01:03:19 +0000 (0:00:00.336) 0:01:05.082 ********
2026-01-05 01:04:48.326860 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:04:48.326868 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-01-05 01:04:48.326875 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:04:48.326891 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:04:48.326903 | orchestrator |
2026-01-05 01:04:48.326911 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-01-05 01:04:48.326919 | orchestrator | skipping: no hosts matched
2026-01-05 01:04:48.326927 | orchestrator |
2026-01-05 01:04:48.326935 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-05 01:04:48.326943 | orchestrator |
2026-01-05 01:04:48.326950 | orchestrator | TASK [mariadb : Restart MariaDB container]
************************************* 2026-01-05 01:04:48.326958 | orchestrator | Monday 05 January 2026 01:03:19 +0000 (0:00:00.565) 0:01:05.647 ******** 2026-01-05 01:04:48.326966 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:04:48.326974 | orchestrator | 2026-01-05 01:04:48.326982 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 01:04:48.326990 | orchestrator | Monday 05 January 2026 01:03:40 +0000 (0:00:20.800) 0:01:26.448 ******** 2026-01-05 01:04:48.326997 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:48.327005 | orchestrator | 2026-01-05 01:04:48.327013 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 01:04:48.327021 | orchestrator | Monday 05 January 2026 01:03:51 +0000 (0:00:10.626) 0:01:37.075 ******** 2026-01-05 01:04:48.327029 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:04:48.327037 | orchestrator | 2026-01-05 01:04:48.327045 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-05 01:04:48.327052 | orchestrator | 2026-01-05 01:04:48.327060 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 01:04:48.327068 | orchestrator | Monday 05 January 2026 01:03:53 +0000 (0:00:02.524) 0:01:39.599 ******** 2026-01-05 01:04:48.327076 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:04:48.327084 | orchestrator | 2026-01-05 01:04:48.327092 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 01:04:48.327099 | orchestrator | Monday 05 January 2026 01:04:09 +0000 (0:00:15.877) 0:01:55.477 ******** 2026-01-05 01:04:48.327107 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:48.327115 | orchestrator | 2026-01-05 01:04:48.327123 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 01:04:48.327131 
| orchestrator | Monday 05 January 2026 01:04:25 +0000 (0:00:15.612) 0:02:11.089 ******** 2026-01-05 01:04:48.327138 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:04:48.327146 | orchestrator | 2026-01-05 01:04:48.327154 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-05 01:04:48.327162 | orchestrator | 2026-01-05 01:04:48.327170 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 01:04:48.327178 | orchestrator | Monday 05 January 2026 01:04:27 +0000 (0:00:02.582) 0:02:13.672 ******** 2026-01-05 01:04:48.327186 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:04:48.327194 | orchestrator | 2026-01-05 01:04:48.327202 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 01:04:48.327209 | orchestrator | Monday 05 January 2026 01:04:40 +0000 (0:00:12.449) 0:02:26.122 ******** 2026-01-05 01:04:48.327217 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:48.327225 | orchestrator | 2026-01-05 01:04:48.327233 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 01:04:48.327241 | orchestrator | Monday 05 January 2026 01:04:44 +0000 (0:00:04.581) 0:02:30.704 ******** 2026-01-05 01:04:48.327252 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:04:48.327265 | orchestrator | 2026-01-05 01:04:48.327279 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-05 01:04:48.327292 | orchestrator | 2026-01-05 01:04:48.327300 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-05 01:04:48.327308 | orchestrator | Monday 05 January 2026 01:04:47 +0000 (0:00:02.666) 0:02:33.370 ******** 2026-01-05 01:04:48.327316 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:04:48.327324 | orchestrator | 
2026-01-05 01:04:48.327332 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-05 01:04:48.327354 | orchestrator | Monday 05 January 2026 01:04:48 +0000 (0:00:00.680) 0:02:34.050 ******** 2026-01-05 01:05:01.902369 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:05:01.902609 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:01.902628 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:05:01.902636 | orchestrator | 2026-01-05 01:05:01.902645 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-05 01:05:01.902654 | orchestrator | Monday 05 January 2026 01:04:50 +0000 (0:00:02.441) 0:02:36.492 ******** 2026-01-05 01:05:01.902661 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:05:01.902668 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:01.902675 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:05:01.902681 | orchestrator | 2026-01-05 01:05:01.902688 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-05 01:05:01.902694 | orchestrator | Monday 05 January 2026 01:04:53 +0000 (0:00:02.404) 0:02:38.897 ******** 2026-01-05 01:05:01.902701 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:05:01.902708 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:01.902715 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:05:01.902721 | orchestrator | 2026-01-05 01:05:01.902727 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-05 01:05:01.902734 | orchestrator | Monday 05 January 2026 01:04:55 +0000 (0:00:02.540) 0:02:41.438 ******** 2026-01-05 01:05:01.902741 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:05:01.902747 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:01.902754 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:05:01.902761 | orchestrator | 
2026-01-05 01:05:01.902767 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-05 01:05:01.902774 | orchestrator | Monday 05 January 2026 01:04:58 +0000 (0:00:02.347) 0:02:43.785 ******** 2026-01-05 01:05:01.902781 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:05:01.902788 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:05:01.902813 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:05:01.902820 | orchestrator | 2026-01-05 01:05:01.902826 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-05 01:05:01.902834 | orchestrator | Monday 05 January 2026 01:05:01 +0000 (0:00:03.072) 0:02:46.857 ******** 2026-01-05 01:05:01.902840 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:01.902846 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:05:01.902853 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:05:01.902861 | orchestrator | 2026-01-05 01:05:01.902867 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:05:01.902875 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-05 01:05:01.902884 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-05 01:05:01.902891 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-05 01:05:01.902897 | orchestrator | 2026-01-05 01:05:01.902903 | orchestrator | 2026-01-05 01:05:01.902909 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:05:01.902915 | orchestrator | Monday 05 January 2026 01:05:01 +0000 (0:00:00.245) 0:02:47.103 ******** 2026-01-05 01:05:01.902921 | orchestrator | =============================================================================== 2026-01-05 01:05:01.902926 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.68s 2026-01-05 01:05:01.902932 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.24s 2026-01-05 01:05:01.902938 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.45s 2026-01-05 01:05:01.902943 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.91s 2026-01-05 01:05:01.902974 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.61s 2026-01-05 01:05:01.902981 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.14s 2026-01-05 01:05:01.902987 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.11s 2026-01-05 01:05:01.902993 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.58s 2026-01-05 01:05:01.902999 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.82s 2026-01-05 01:05:01.903008 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.07s 2026-01-05 01:05:01.903014 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.85s 2026-01-05 01:05:01.903020 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.78s 2026-01-05 01:05:01.903027 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.67s 2026-01-05 01:05:01.903033 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.57s 2026-01-05 01:05:01.903039 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.54s 2026-01-05 01:05:01.903045 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.46s 2026-01-05 01:05:01.903052 | 
orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.44s 2026-01-05 01:05:01.903059 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.42s 2026-01-05 01:05:01.903065 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.40s 2026-01-05 01:05:01.903071 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.35s 2026-01-05 01:05:04.475053 | orchestrator | 2026-01-05 01:05:04 | INFO  | Task c36e3488-2fdf-4283-a0f7-a9913d10bec2 (rabbitmq) was prepared for execution. 2026-01-05 01:05:04.475144 | orchestrator | 2026-01-05 01:05:04 | INFO  | It takes a moment until task c36e3488-2fdf-4283-a0f7-a9913d10bec2 (rabbitmq) has been started and output is visible here. 2026-01-05 01:05:17.510910 | orchestrator | 2026-01-05 01:05:17.511000 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:05:17.511009 | orchestrator | 2026-01-05 01:05:17.511017 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:05:17.511024 | orchestrator | Monday 05 January 2026 01:05:08 +0000 (0:00:00.174) 0:00:00.174 ******** 2026-01-05 01:05:17.511031 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:05:17.511039 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:05:17.511045 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:05:17.511052 | orchestrator | 2026-01-05 01:05:17.511058 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:05:17.511064 | orchestrator | Monday 05 January 2026 01:05:09 +0000 (0:00:00.296) 0:00:00.470 ******** 2026-01-05 01:05:17.511070 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-05 01:05:17.511077 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-05 01:05:17.511084 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-05 01:05:17.511090 | orchestrator | 2026-01-05 01:05:17.511096 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-05 01:05:17.511103 | orchestrator | 2026-01-05 01:05:17.511109 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 01:05:17.511116 | orchestrator | Monday 05 January 2026 01:05:09 +0000 (0:00:00.488) 0:00:00.958 ******** 2026-01-05 01:05:17.511138 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:05:17.511146 | orchestrator | 2026-01-05 01:05:17.511152 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-05 01:05:17.511158 | orchestrator | Monday 05 January 2026 01:05:10 +0000 (0:00:00.489) 0:00:01.448 ******** 2026-01-05 01:05:17.511165 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:05:17.511191 | orchestrator | 2026-01-05 01:05:17.511197 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-05 01:05:17.511203 | orchestrator | Monday 05 January 2026 01:05:11 +0000 (0:00:00.963) 0:00:02.411 ******** 2026-01-05 01:05:17.511210 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:17.511217 | orchestrator | 2026-01-05 01:05:17.511223 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-05 01:05:17.511230 | orchestrator | Monday 05 January 2026 01:05:11 +0000 (0:00:00.328) 0:00:02.740 ******** 2026-01-05 01:05:17.511236 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:17.511242 | orchestrator | 2026-01-05 01:05:17.511247 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-05 01:05:17.511254 | orchestrator | Monday 05 January 2026 01:05:11 +0000 (0:00:00.355) 0:00:03.095 ******** 
2026-01-05 01:05:17.511259 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:17.511265 | orchestrator | 2026-01-05 01:05:17.511271 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-05 01:05:17.511278 | orchestrator | Monday 05 January 2026 01:05:12 +0000 (0:00:00.365) 0:00:03.461 ******** 2026-01-05 01:05:17.511284 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:17.511290 | orchestrator | 2026-01-05 01:05:17.511296 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 01:05:17.511302 | orchestrator | Monday 05 January 2026 01:05:12 +0000 (0:00:00.464) 0:00:03.926 ******** 2026-01-05 01:05:17.511309 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:05:17.511316 | orchestrator | 2026-01-05 01:05:17.511322 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-05 01:05:17.511328 | orchestrator | Monday 05 January 2026 01:05:13 +0000 (0:00:00.791) 0:00:04.718 ******** 2026-01-05 01:05:17.511334 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:05:17.511340 | orchestrator | 2026-01-05 01:05:17.511345 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-05 01:05:17.511352 | orchestrator | Monday 05 January 2026 01:05:14 +0000 (0:00:00.956) 0:00:05.674 ******** 2026-01-05 01:05:17.511359 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:17.511365 | orchestrator | 2026-01-05 01:05:17.511370 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-05 01:05:17.511376 | orchestrator | Monday 05 January 2026 01:05:14 +0000 (0:00:00.371) 0:00:06.046 ******** 2026-01-05 01:05:17.511382 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:05:17.511388 | orchestrator | 2026-01-05 
01:05:17.511394 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-05 01:05:17.511399 | orchestrator | Monday 05 January 2026 01:05:15 +0000 (0:00:00.362) 0:00:06.408 ******** 2026-01-05 01:05:17.511453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 01:05:17.511469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 01:05:17.511484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 01:05:17.511490 | orchestrator | 2026-01-05 01:05:17.511496 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-05 01:05:17.511503 | orchestrator | Monday 05 January 2026 01:05:15 +0000 (0:00:00.813) 0:00:07.221 ******** 2026-01-05 01:05:17.511509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 01:05:17.511523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 01:05:35.527880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 01:05:35.527988 | orchestrator | 2026-01-05 01:05:35.527998 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-05 01:05:35.528003 | orchestrator | Monday 05 January 2026 01:05:17 +0000 (0:00:01.649) 0:00:08.871 ******** 2026-01-05 01:05:35.528007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 01:05:35.528013 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 01:05:35.528017 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 01:05:35.528021 | orchestrator | 2026-01-05 01:05:35.528025 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-01-05 01:05:35.528029 | orchestrator | Monday 05 January 2026 01:05:18 +0000 (0:00:01.374) 0:00:10.245 ******** 2026-01-05 01:05:35.528033 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 01:05:35.528038 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 01:05:35.528041 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 01:05:35.528045 | orchestrator | 2026-01-05 01:05:35.528049 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-05 01:05:35.528054 | orchestrator | Monday 05 January 2026 01:05:20 +0000 (0:00:01.676) 0:00:11.921 ******** 2026-01-05 01:05:35.528058 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 01:05:35.528062 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 01:05:35.528066 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 01:05:35.528069 | orchestrator | 2026-01-05 01:05:35.528073 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-05 01:05:35.528077 | orchestrator | Monday 05 January 2026 01:05:21 +0000 (0:00:01.357) 0:00:13.279 ******** 2026-01-05 01:05:35.528081 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 01:05:35.528084 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 01:05:35.528088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 01:05:35.528108 | orchestrator | 2026-01-05 01:05:35.528112 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ********************************
2026-01-05 01:05:35.528116 | orchestrator | Monday 05 January 2026 01:05:23 +0000 (0:00:01.577) 0:00:14.856 ********
2026-01-05 01:05:35.528120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-05 01:05:35.528123 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-05 01:05:35.528127 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-05 01:05:35.528131 | orchestrator |
2026-01-05 01:05:35.528135 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-01-05 01:05:35.528139 | orchestrator | Monday 05 January 2026 01:05:24 +0000 (0:00:01.416) 0:00:16.272 ********
2026-01-05 01:05:35.528142 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-05 01:05:35.528146 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-05 01:05:35.528150 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-05 01:05:35.528154 | orchestrator |
2026-01-05 01:05:35.528157 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-05 01:05:35.528161 | orchestrator | Monday 05 January 2026 01:05:26 +0000 (0:00:00.415) 0:00:17.713 ********
2026-01-05 01:05:35.528166 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:05:35.528171 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:05:35.528186 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:05:35.528190 | orchestrator |
2026-01-05 01:05:35.528194 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-01-05 01:05:35.528197 | orchestrator | Monday 05 January 2026 01:05:26 +0000 (0:00:00.415) 0:00:18.129 ********
2026-01-05 01:05:35.528206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 01:05:35.528210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 01:05:35.528220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-05 01:05:35.528224 | orchestrator |
2026-01-05 01:05:35.528228 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-01-05 01:05:35.528232 | orchestrator | Monday 05 January 2026 01:05:27 +0000 (0:00:01.237) 0:00:19.366 ********
2026-01-05 01:05:35.528236 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:05:35.528239 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:05:35.528243 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:05:35.528247 | orchestrator |
2026-01-05 01:05:35.528251 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-01-05 01:05:35.528255 | orchestrator | Monday 05 January 2026 01:05:28 +0000 (0:00:00.884) 0:00:20.250 ********
2026-01-05 01:05:35.528258 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:05:35.528262 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:05:35.528266 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:05:35.528270 | orchestrator |
2026-01-05 01:05:35.528274 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-01-05 01:05:35.528280 | orchestrator | Monday 05 January 2026 01:05:35 +0000 (0:00:06.629) 0:00:26.880 ********
2026-01-05 01:07:17.733687 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:17.733771 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:17.733777 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:17.733783 | orchestrator |
2026-01-05 01:07:17.733789 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-05 01:07:17.733794 | orchestrator |
2026-01-05 01:07:17.733798 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-05 01:07:17.733802 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:00.523) 0:00:27.404 ********
2026-01-05 01:07:17.733806 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:07:17.733811 | orchestrator |
2026-01-05 01:07:17.733815 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-05 01:07:17.733819 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:00.651) 0:00:28.055 ********
2026-01-05 01:07:17.733823 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:07:17.733839 | orchestrator |
2026-01-05 01:07:17.733843 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-05 01:07:17.733846 | orchestrator | Monday 05 January 2026 01:05:36 +0000 (0:00:00.233) 0:00:28.289 ********
2026-01-05 01:07:17.733850 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:17.733854 | orchestrator |
2026-01-05 01:07:17.733858 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-05 01:07:17.733861 | orchestrator | Monday 05 January 2026 01:05:38 +0000 (0:00:01.659) 0:00:29.948 ********
2026-01-05 01:07:17.733865 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:07:17.733869 | orchestrator |
2026-01-05 01:07:17.733873 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-05 01:07:17.733877 | orchestrator |
2026-01-05 01:07:17.733895 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-05 01:07:17.733899 | orchestrator | Monday 05 January 2026 01:06:35 +0000 (0:00:57.272) 0:01:27.220 ********
2026-01-05 01:07:17.733902 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:07:17.733906 | orchestrator |
2026-01-05 01:07:17.733910 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-05 01:07:17.733914 | orchestrator | Monday 05 January 2026 01:06:36 +0000 (0:00:00.619) 0:01:27.840 ********
2026-01-05 01:07:17.733917 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:07:17.733921 | orchestrator |
2026-01-05 01:07:17.733925 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-05 01:07:17.733929 | orchestrator | Monday 05 January 2026 01:06:36 +0000 (0:00:00.206) 0:01:28.047 ********
2026-01-05 01:07:17.733932 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:17.733937 | orchestrator |
2026-01-05 01:07:17.733941 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-05 01:07:17.733944 | orchestrator | Monday 05 January 2026 01:06:38 +0000 (0:00:01.649) 0:01:29.697 ********
2026-01-05 01:07:17.733948 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:07:17.733952 | orchestrator |
2026-01-05 01:07:17.733956 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-05 01:07:17.733960 | orchestrator |
2026-01-05 01:07:17.733963 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-05 01:07:17.733967 | orchestrator | Monday 05 January 2026 01:06:54 +0000 (0:00:15.789) 0:01:45.486 ********
2026-01-05 01:07:17.733971 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:07:17.733975 | orchestrator |
2026-01-05 01:07:17.733978 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-05 01:07:17.733982 | orchestrator | Monday 05 January 2026 01:06:54 +0000 (0:00:00.799) 0:01:46.286 ********
2026-01-05 01:07:17.733986 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:07:17.733990 | orchestrator |
2026-01-05 01:07:17.733993 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-05 01:07:17.733997 | orchestrator | Monday 05 January 2026 01:06:55 +0000 (0:00:00.313) 0:01:46.600 ********
2026-01-05 01:07:17.734001 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:17.734005 | orchestrator |
2026-01-05 01:07:17.734008 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-05 01:07:17.734046 | orchestrator | Monday 05 January 2026 01:07:01 +0000 (0:00:06.709) 0:01:53.309 ********
2026-01-05 01:07:17.734050 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:07:17.734054 | orchestrator |
2026-01-05 01:07:17.734058 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-01-05 01:07:17.734062 | orchestrator |
2026-01-05 01:07:17.734066 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-01-05 01:07:17.734069 | orchestrator | Monday 05 January 2026 01:07:14 +0000 (0:00:12.416) 0:02:05.725 ********
2026-01-05 01:07:17.734073 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:07:17.734077 | orchestrator |
2026-01-05 01:07:17.734081 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-01-05 01:07:17.734084 | orchestrator | Monday 05 January 2026 01:07:14 +0000 (0:00:00.535) 0:02:06.261 ********
2026-01-05 01:07:17.734088 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-05 01:07:17.734092 | orchestrator | enable_outward_rabbitmq_True
2026-01-05 01:07:17.734096 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-05 01:07:17.734100 | orchestrator | outward_rabbitmq_restart
2026-01-05 01:07:17.734104 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:07:17.734107 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:07:17.734111 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:07:17.734115 | orchestrator |
2026-01-05 01:07:17.734119 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-01-05 01:07:17.734122 | orchestrator | skipping: no hosts matched
2026-01-05 01:07:17.734130 | orchestrator |
2026-01-05 01:07:17.734134 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-01-05 01:07:17.734138 | orchestrator | skipping: no hosts matched
2026-01-05 01:07:17.734142 | orchestrator |
2026-01-05 01:07:17.734145 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-01-05 01:07:17.734149 | orchestrator | skipping: no hosts matched
2026-01-05 01:07:17.734153 | orchestrator |
2026-01-05 01:07:17.734157 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:07:17.734172 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-05 01:07:17.734178 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:07:17.734182 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:07:17.734185 | orchestrator |
2026-01-05 01:07:17.734189 | orchestrator |
2026-01-05 01:07:17.734193 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:07:17.734197 | orchestrator | Monday 05 January 2026 01:07:17 +0000 (0:00:02.494) 0:02:08.755 ********
2026-01-05 01:07:17.734204 | orchestrator | ===============================================================================
2026-01-05 01:07:17.734208 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 85.48s
2026-01-05 01:07:17.734212 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.02s
2026-01-05 01:07:17.734274 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.63s
2026-01-05 01:07:17.734278 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.49s
2026-01-05 01:07:17.734283 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.07s
2026-01-05 01:07:17.734288 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.68s
2026-01-05 01:07:17.734292 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.65s
2026-01-05 01:07:17.734296 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.58s
2026-01-05 01:07:17.734301 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.44s
2026-01-05 01:07:17.734306 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.42s
2026-01-05 01:07:17.734310 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.37s
2026-01-05 01:07:17.734315 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.36s
2026-01-05 01:07:17.734319 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.24s
2026-01-05 01:07:17.734324 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s
2026-01-05 01:07:17.734328 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s
2026-01-05 01:07:17.734333 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.88s
2026-01-05 01:07:17.734337 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.81s
2026-01-05 01:07:17.734342 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.79s
2026-01-05 01:07:17.734346 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.75s
2026-01-05 01:07:17.734350 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.54s
2026-01-05 01:07:20.259810 | orchestrator | 2026-01-05 01:07:20 | INFO  | Task fdac1e05-db50-44a7-abcf-55317bcafff6 (openvswitch) was prepared for execution.
2026-01-05 01:07:20.259907 | orchestrator | 2026-01-05 01:07:20 | INFO  | It takes a moment until task fdac1e05-db50-44a7-abcf-55317bcafff6 (openvswitch) has been started and output is visible here.
2026-01-05 01:07:33.476174 | orchestrator |
2026-01-05 01:07:33.476320 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:07:33.476331 | orchestrator |
2026-01-05 01:07:33.476338 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:07:33.476345 | orchestrator | Monday 05 January 2026 01:07:24 +0000 (0:00:00.277) 0:00:00.277 ********
2026-01-05 01:07:33.476353 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:07:33.476361 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:07:33.476367 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:07:33.476374 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:07:33.476380 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:07:33.476386 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:07:33.476393 | orchestrator |
2026-01-05 01:07:33.476399 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:07:33.476406 | orchestrator | Monday 05 January 2026 01:07:25 +0000 (0:00:00.706) 0:00:00.984 ********
2026-01-05 01:07:33.476412 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-05 01:07:33.476419 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-05 01:07:33.476426 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-05 01:07:33.476432 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-05 01:07:33.476438 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-05 01:07:33.476445 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-05 01:07:33.476451 | orchestrator |
2026-01-05 01:07:33.476457 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-01-05 01:07:33.476463 | orchestrator |
2026-01-05 01:07:33.476470 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-01-05 01:07:33.476476 | orchestrator | Monday 05 January 2026 01:07:25 +0000 (0:00:00.614) 0:00:01.598 ********
2026-01-05 01:07:33.476484 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:07:33.476491 | orchestrator |
2026-01-05 01:07:33.476498 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-05 01:07:33.476504 | orchestrator | Monday 05 January 2026 01:07:27 +0000 (0:00:01.192) 0:00:02.791 ********
2026-01-05 01:07:33.476511 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-05 01:07:33.476517 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-05 01:07:33.476523 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-05 01:07:33.476530 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-05 01:07:33.476536 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-05 01:07:33.476542 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-05 01:07:33.476548 | orchestrator |
2026-01-05 01:07:33.476597 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-05 01:07:33.476605 | orchestrator | Monday 05 January 2026 01:07:28 +0000 (0:00:01.315) 0:00:04.107 ********
2026-01-05 01:07:33.476697 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-05 01:07:33.476711 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-05 01:07:33.476719 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-05 01:07:33.476727 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-05 01:07:33.476734 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-05 01:07:33.476742 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-05 01:07:33.476749 | orchestrator |
2026-01-05 01:07:33.476756 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-05 01:07:33.476764 | orchestrator | Monday 05 January 2026 01:07:30 +0000 (0:00:01.548) 0:00:05.655 ********
2026-01-05 01:07:33.476786 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-01-05 01:07:33.476794 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:07:33.476802 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-01-05 01:07:33.476809 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:07:33.476817 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-01-05 01:07:33.476824 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:07:33.476831 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-01-05 01:07:33.476839 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:07:33.476847 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-01-05 01:07:33.476854 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:07:33.476862 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-01-05 01:07:33.476869 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:07:33.476877 | orchestrator |
2026-01-05 01:07:33.476884 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-01-05 01:07:33.476890 | orchestrator | Monday 05 January 2026 01:07:31 +0000 (0:00:00.767) 0:00:06.905 ********
2026-01-05 01:07:33.476896 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:07:33.476902 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:07:33.476908 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:07:33.476914 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:07:33.476920 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:07:33.476926 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:07:33.476933 | orchestrator |
2026-01-05 01:07:33.476939 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-01-05 01:07:33.476945 | orchestrator | Monday 05 January 2026 01:07:32 +0000 (0:00:00.767) 0:00:07.672 ********
2026-01-05 01:07:33.476969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:33.476979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:33.476986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:33.477001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:33.477008 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:33.477019 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:36.096719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 01:07:36.096831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 01:07:36.096865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 01:07:36.096902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 01:07:36.096914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 01:07:36.096945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 01:07:36.096958 | orchestrator |
2026-01-05 01:07:36.096972 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-01-05 01:07:36.096986 | orchestrator | Monday 05 January 2026 01:07:33 +0000 (0:00:01.493) 0:00:09.166 ********
2026-01-05 01:07:36.096998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:36.097012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:36.097037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:36.097050 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:36.097062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:36.097082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-05 01:07:39.215604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 01:07:39.215687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-05 01:07:39.215722 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 01:07:39.215727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 01:07:39.215731 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 01:07:39.215746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 01:07:39.215750 | orchestrator | 2026-01-05 01:07:39.215756 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-05 01:07:39.215761 | orchestrator | Monday 05 January 2026 01:07:36 +0000 (0:00:02.647) 0:00:11.813 ******** 2026-01-05 01:07:39.215765 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:07:39.215770 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:07:39.215774 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:07:39.215782 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:07:39.215786 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:07:39.215789 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:07:39.215793 | orchestrator | 2026-01-05 01:07:39.215797 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-05 01:07:39.215801 | orchestrator | Monday 05 January 2026 01:07:37 +0000 (0:00:01.024) 0:00:12.838 ******** 2026-01-05 01:07:39.215805 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 01:07:39.215814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 01:07:39.215818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 01:07:39.215822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 01:07:39.215831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 01:08:00.133777 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 01:08:00.133932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 01:08:00.133961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 
01:08:00.133977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 01:08:00.133992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 01:08:00.134111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 01:08:00.134186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 01:08:00.134204 | orchestrator | 2026-01-05 01:08:00.134222 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 01:08:00.134239 | orchestrator | Monday 05 January 2026 01:07:39 +0000 (0:00:02.068) 0:00:14.907 ******** 2026-01-05 01:08:00.134250 | orchestrator | 2026-01-05 01:08:00.134261 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 01:08:00.134272 | orchestrator | Monday 05 January 2026 01:07:39 +0000 (0:00:00.252) 0:00:15.159 ******** 2026-01-05 01:08:00.134282 | orchestrator | 2026-01-05 01:08:00.134301 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 01:08:00.134312 | orchestrator | Monday 05 January 2026 01:07:39 +0000 (0:00:00.129) 0:00:15.289 ******** 2026-01-05 01:08:00.134322 | orchestrator | 2026-01-05 01:08:00.134333 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-01-05 01:08:00.134344 | orchestrator | Monday 05 January 2026 01:07:39 +0000 (0:00:00.120) 0:00:15.410 ******** 2026-01-05 01:08:00.134354 | orchestrator | 2026-01-05 01:08:00.134366 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 01:08:00.134376 | orchestrator | Monday 05 January 2026 01:07:39 +0000 (0:00:00.120) 0:00:15.531 ******** 2026-01-05 01:08:00.134387 | orchestrator | 2026-01-05 01:08:00.134398 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 01:08:00.134410 | orchestrator | Monday 05 January 2026 01:07:40 +0000 (0:00:00.121) 0:00:15.652 ******** 2026-01-05 01:08:00.134420 | orchestrator | 2026-01-05 01:08:00.134430 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-05 01:08:00.134439 | orchestrator | Monday 05 January 2026 01:07:40 +0000 (0:00:00.124) 0:00:15.777 ******** 2026-01-05 01:08:00.134448 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:08:00.134458 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:08:00.134467 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:08:00.134475 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:08:00.134484 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:08:00.134492 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:08:00.134501 | orchestrator | 2026-01-05 01:08:00.134510 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-05 01:08:00.134533 | orchestrator | Monday 05 January 2026 01:07:48 +0000 (0:00:08.503) 0:00:24.281 ******** 2026-01-05 01:08:00.134542 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:08:00.134552 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:08:00.134642 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:08:00.134664 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:08:00.134679 | orchestrator | ok: 
[testbed-node-4] 2026-01-05 01:08:00.134693 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:08:00.134707 | orchestrator | 2026-01-05 01:08:00.134720 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-05 01:08:00.134749 | orchestrator | Monday 05 January 2026 01:07:49 +0000 (0:00:01.155) 0:00:25.436 ******** 2026-01-05 01:08:00.134763 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:08:00.134778 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:08:00.134792 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:08:00.134805 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:08:00.134818 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:08:00.134831 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:08:00.134844 | orchestrator | 2026-01-05 01:08:00.134858 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-05 01:08:00.134872 | orchestrator | Monday 05 January 2026 01:07:52 +0000 (0:00:03.154) 0:00:28.591 ******** 2026-01-05 01:08:00.134887 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-05 01:08:00.134901 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-05 01:08:00.134916 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-05 01:08:00.134931 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-05 01:08:00.134946 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-05 01:08:00.134961 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-05 
01:08:00.134976 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-05 01:08:00.135006 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-05 01:08:13.891703 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-05 01:08:13.891792 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-05 01:08:13.891798 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-05 01:08:13.891802 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-05 01:08:13.891807 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 01:08:13.891812 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 01:08:13.891815 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 01:08:13.891819 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 01:08:13.891823 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 01:08:13.891827 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 01:08:13.891831 | orchestrator | 2026-01-05 01:08:13.891849 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-01-05 01:08:13.891855 | orchestrator | Monday 05 January 2026 01:08:00 +0000 (0:00:07.143) 0:00:35.735 ******** 2026-01-05 01:08:13.891860 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-05 01:08:13.891864 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:08:13.891869 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-05 01:08:13.891873 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:08:13.891877 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-05 01:08:13.891896 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:08:13.891900 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-05 01:08:13.891904 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-05 01:08:13.891908 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-05 01:08:13.891911 | orchestrator | 2026-01-05 01:08:13.891915 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-05 01:08:13.891919 | orchestrator | Monday 05 January 2026 01:08:02 +0000 (0:00:02.737) 0:00:38.472 ******** 2026-01-05 01:08:13.891923 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-05 01:08:13.891927 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:08:13.891931 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-05 01:08:13.891935 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:08:13.891938 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-05 01:08:13.891942 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:08:13.891946 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-05 01:08:13.891950 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-05 01:08:13.891953 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-05 01:08:13.891957 | orchestrator 
| 2026-01-05 01:08:13.891961 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-05 01:08:13.891965 | orchestrator | Monday 05 January 2026 01:08:06 +0000 (0:00:03.407) 0:00:41.879 ******** 2026-01-05 01:08:13.891969 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:08:13.891972 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:08:13.891976 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:08:13.891980 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:08:13.891984 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:08:13.891987 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:08:13.891992 | orchestrator | 2026-01-05 01:08:13.891998 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:08:13.892007 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 01:08:13.892015 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 01:08:13.892021 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 01:08:13.892027 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 01:08:13.892033 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 01:08:13.892039 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 01:08:13.892045 | orchestrator | 2026-01-05 01:08:13.892050 | orchestrator | 2026-01-05 01:08:13.892056 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:08:13.892063 | orchestrator | Monday 05 January 2026 01:08:13 +0000 (0:00:07.180) 0:00:49.060 ******** 2026-01-05 01:08:13.892083 | 
orchestrator | =============================================================================== 2026-01-05 01:08:13.892091 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.34s 2026-01-05 01:08:13.892098 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.50s 2026-01-05 01:08:13.892105 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.14s 2026-01-05 01:08:13.892111 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.41s 2026-01-05 01:08:13.892178 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.74s 2026-01-05 01:08:13.892187 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.65s 2026-01-05 01:08:13.892194 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.07s 2026-01-05 01:08:13.892201 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.55s 2026-01-05 01:08:13.892208 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.49s 2026-01-05 01:08:13.892212 | orchestrator | module-load : Load modules ---------------------------------------------- 1.32s 2026-01-05 01:08:13.892215 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.25s 2026-01-05 01:08:13.892219 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.19s 2026-01-05 01:08:13.892225 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.16s 2026-01-05 01:08:13.892231 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.02s 2026-01-05 01:08:13.892242 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.87s 2026-01-05 01:08:13.892248 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.77s 2026-01-05 01:08:13.892254 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2026-01-05 01:08:13.892260 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-01-05 01:08:16.435526 | orchestrator | 2026-01-05 01:08:16 | INFO  | Task 781fba6f-0797-42c7-a963-b10e217a470b (ovn) was prepared for execution. 2026-01-05 01:08:16.435624 | orchestrator | 2026-01-05 01:08:16 | INFO  | It takes a moment until task 781fba6f-0797-42c7-a963-b10e217a470b (ovn) has been started and output is visible here. 2026-01-05 01:08:27.920838 | orchestrator | 2026-01-05 01:08:27.920962 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:08:27.920976 | orchestrator | 2026-01-05 01:08:27.920988 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:08:27.920998 | orchestrator | Monday 05 January 2026 01:08:21 +0000 (0:00:00.186) 0:00:00.186 ******** 2026-01-05 01:08:27.921007 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:08:27.921020 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:08:27.921035 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:08:27.921050 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:08:27.921066 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:08:27.921081 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:08:27.921093 | orchestrator | 2026-01-05 01:08:27.921130 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:08:27.921142 | orchestrator | Monday 05 January 2026 01:08:21 +0000 (0:00:00.735) 0:00:00.921 ******** 2026-01-05 01:08:27.921151 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-05 01:08:27.921160 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-05 
01:08:27.921172 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-05 01:08:27.921187 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-05 01:08:27.921202 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-05 01:08:27.921217 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-05 01:08:27.921232 | orchestrator | 2026-01-05 01:08:27.921244 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-05 01:08:27.921254 | orchestrator | 2026-01-05 01:08:27.921263 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-05 01:08:27.921272 | orchestrator | Monday 05 January 2026 01:08:22 +0000 (0:00:00.821) 0:00:01.743 ******** 2026-01-05 01:08:27.921282 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:08:27.921292 | orchestrator | 2026-01-05 01:08:27.921301 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-05 01:08:27.921332 | orchestrator | Monday 05 January 2026 01:08:23 +0000 (0:00:01.169) 0:00:02.913 ******** 2026-01-05 01:08:27.921345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921451 | orchestrator | 2026-01-05 01:08:27.921465 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-05 01:08:27.921481 | orchestrator | Monday 05 January 2026 01:08:25 +0000 (0:00:01.224) 0:00:04.137 ******** 2026-01-05 01:08:27.921497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921557 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921594 | orchestrator | 2026-01-05 01:08:27.921604 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-05 01:08:27.921615 | orchestrator | Monday 05 January 2026 01:08:26 +0000 (0:00:01.594) 0:00:05.732 ******** 2026-01-05 01:08:27.921632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:27.921662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501764 | orchestrator | 2026-01-05 01:08:54.501770 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-05 01:08:54.501776 | orchestrator | Monday 05 January 2026 01:08:27 +0000 (0:00:01.162) 0:00:06.894 ******** 2026-01-05 01:08:54.501781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501786 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501833 | orchestrator | 2026-01-05 01:08:54.501843 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-05 01:08:54.501848 | orchestrator | Monday 05 January 2026 01:08:29 +0000 (0:00:01.546) 0:00:08.441 ******** 
2026-01-05 01:08:54.501852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501871 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:08:54.501881 | orchestrator | 2026-01-05 01:08:54.501885 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-05 01:08:54.501890 | orchestrator | Monday 05 January 2026 01:08:30 +0000 (0:00:01.465) 0:00:09.907 ******** 2026-01-05 01:08:54.501895 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:08:54.501901 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:08:54.501905 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:08:54.501913 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:08:54.501918 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:08:54.501922 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:08:54.501927 | orchestrator | 2026-01-05 01:08:54.501931 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-05 01:08:54.501936 | orchestrator | Monday 05 January 2026 01:08:33 +0000 (0:00:02.973) 0:00:12.880 ******** 2026-01-05 01:08:54.501941 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-01-05 01:08:54.501951 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-01-05 01:08:54.501956 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-01-05 01:08:54.501960 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-01-05 01:08:54.501965 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-01-05 01:08:54.501970 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-01-05 01:08:54.501978 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 01:09:29.745451 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 01:09:29.745628 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 01:09:29.745656 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 01:09:29.745673 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 01:09:29.745688 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 01:09:29.745705 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 01:09:29.745723 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 01:09:29.745739 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 01:09:29.745755 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 01:09:29.745770 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 01:09:29.745786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 01:09:29.745802 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 01:09:29.745820 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 01:09:29.745836 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 01:09:29.745851 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 01:09:29.745866 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 01:09:29.745882 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 01:09:29.745897 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 01:09:29.745914 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 01:09:29.745930 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 01:09:29.745945 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 01:09:29.745961 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 01:09:29.745976 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 01:09:29.745991 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 01:09:29.746124 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 01:09:29.746139 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 01:09:29.746150 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 01:09:29.746161 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 01:09:29.746172 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 01:09:29.746198 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-05 01:09:29.746208 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-05 01:09:29.746217 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-05 01:09:29.746226 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-05 01:09:29.746234 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-05 01:09:29.746243 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-05 01:09:29.746258 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-01-05 01:09:29.746309 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-01-05 01:09:29.746329 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-01-05 01:09:29.746343 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-01-05 01:09:29.746357 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-01-05 01:09:29.746372 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-01-05 01:09:29.746387 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-05 01:09:29.746402 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-05 01:09:29.746417 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-05 01:09:29.746431 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-05 01:09:29.746444 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-05 01:09:29.746453 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-05 01:09:29.746461 | orchestrator |
2026-01-05 01:09:29.746496 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 01:09:29.746505 | orchestrator | Monday 05 January 2026 01:08:53 +0000 (0:00:19.907) 0:00:32.788 ********
2026-01-05 01:09:29.746514 | orchestrator |
2026-01-05 01:09:29.746522 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 01:09:29.746530 | orchestrator | Monday 05 January 2026 01:08:54 +0000 (0:00:00.275) 0:00:33.064 ********
2026-01-05 01:09:29.746537 | orchestrator |
2026-01-05 01:09:29.746545 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 01:09:29.746565 | orchestrator | Monday 05 January 2026 01:08:54 +0000 (0:00:00.106) 0:00:33.171 ********
2026-01-05 01:09:29.746573 | orchestrator |
2026-01-05 01:09:29.746581 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 01:09:29.746588 | orchestrator | Monday 05 January 2026 01:08:54 +0000 (0:00:00.075) 0:00:33.247 ********
2026-01-05 01:09:29.746596 | orchestrator |
2026-01-05 01:09:29.746604 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 01:09:29.746612 | orchestrator | Monday 05 January 2026 01:08:54 +0000 (0:00:00.068) 0:00:33.315 ********
2026-01-05 01:09:29.746620 | orchestrator |
2026-01-05 01:09:29.746627 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 01:09:29.746635 | orchestrator | Monday 05 January 2026 01:08:54 +0000 (0:00:00.064) 0:00:33.379 ********
2026-01-05 01:09:29.746643 | orchestrator |
2026-01-05 01:09:29.746651 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-01-05 01:09:29.746658 | orchestrator | Monday 05 January 2026 01:08:54 +0000 (0:00:00.091) 0:00:33.471 ********
2026-01-05 01:09:29.746666 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:29.746676 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:09:29.746684 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:09:29.746692 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:09:29.746699 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:29.746707 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:29.746715 | orchestrator |
2026-01-05 01:09:29.746737 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-01-05 01:09:29.746754 | orchestrator | Monday 05 January 2026 01:08:56 +0000 (0:00:01.695) 0:00:35.166 ********
2026-01-05 01:09:29.746762 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:09:29.746770 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:09:29.746778 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:09:29.746786 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:09:29.746795 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:09:29.746803 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:09:29.746810 | orchestrator |
2026-01-05 01:09:29.746826 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-01-05 01:09:29.746841 | orchestrator |
2026-01-05 01:09:29.746859 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-05 01:09:29.746874 | orchestrator | Monday 05 January 2026 01:09:27 +0000 (0:00:31.242) 0:01:06.409 ********
2026-01-05 01:09:29.746888 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:09:29.746902 | orchestrator |
2026-01-05 01:09:29.746914 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-05 01:09:29.746926 | orchestrator | Monday 05 January 2026 01:09:28 +0000 (0:00:00.717) 0:01:07.126 ********
2026-01-05 01:09:29.746939 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:09:29.746951 | orchestrator |
2026-01-05 01:09:29.746963 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-01-05 01:09:29.746977 | orchestrator | Monday 05 January 2026 01:09:28 +0000 (0:00:00.528) 0:01:07.655 ********
2026-01-05 01:09:29.746990 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:29.747003 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:29.747016 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:29.747054 | orchestrator |
2026-01-05 01:09:29.747068 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-01-05 01:09:29.747093 | orchestrator | Monday 05 January 2026 01:09:29 +0000 (0:00:01.063) 0:01:08.718 ********
2026-01-05 01:09:41.410228 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:41.410380 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:41.410407 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:41.410427 | orchestrator |
2026-01-05 01:09:41.410446 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-01-05 01:09:41.410498 | orchestrator | Monday 05 January 2026 01:09:30 +0000 (0:00:00.353) 0:01:09.072 ********
2026-01-05 01:09:41.410518 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:41.410537 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:41.410555 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:41.410574 | orchestrator |
2026-01-05 01:09:41.410593 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-01-05 01:09:41.410612 | orchestrator | Monday 05 January 2026 01:09:30 +0000 (0:00:00.324) 0:01:09.396 ********
2026-01-05 01:09:41.410630 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:41.410648 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:41.410666 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:41.410686 | orchestrator |
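Editor's note: the `ovn-remote` value applied by the "Configure OVN in OVSDB" task above is just the OVN southbound DB endpoints of the three control nodes joined into one connection string. A minimal sketch that reconstructs it from the addresses seen in this log; the helper name `build_ovn_remote` is illustrative, not OSISM/kolla-ansible code:

```python
# Sketch only: rebuild the ovn-remote connection string recorded in this log.
# The port 6642 is the OVN southbound DB port, as seen in the logged value.
SB_DB_PORT = 6642

def build_ovn_remote(hosts):
    """Join each control-node address into a comma-separated tcp endpoint list."""
    return ",".join(f"tcp:{host}:{SB_DB_PORT}" for host in hosts)

# Internal addresses of testbed-node-0..2, taken from the ovn-encap-ip items above.
control_nodes = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
print(build_ovn_remote(control_nodes))
# → tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

All six chassis receive this identical three-member endpoint list, which is why every node in the task output shows the same `ovn-remote` value while `ovn-encap-ip` differs per node.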
2026-01-05 01:09:41.410706 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-01-05 01:09:41.410726 | orchestrator | Monday 05 January 2026 01:09:30 +0000 (0:00:00.377) 0:01:09.773 ********
2026-01-05 01:09:41.410745 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:41.410764 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:41.410784 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:41.410804 | orchestrator |
2026-01-05 01:09:41.410825 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-01-05 01:09:41.410845 | orchestrator | Monday 05 January 2026 01:09:31 +0000 (0:00:00.335) 0:01:10.109 ********
2026-01-05 01:09:41.410865 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.410887 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.410906 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.410925 | orchestrator |
2026-01-05 01:09:41.410943 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-01-05 01:09:41.410961 | orchestrator | Monday 05 January 2026 01:09:31 +0000 (0:00:00.558) 0:01:10.667 ********
2026-01-05 01:09:41.410978 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.410996 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411042 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411063 | orchestrator |
2026-01-05 01:09:41.411081 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-01-05 01:09:41.411099 | orchestrator | Monday 05 January 2026 01:09:31 +0000 (0:00:00.296) 0:01:10.964 ********
2026-01-05 01:09:41.411117 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411135 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411155 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411172 | orchestrator |
2026-01-05 01:09:41.411189 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-01-05 01:09:41.411207 | orchestrator | Monday 05 January 2026 01:09:32 +0000 (0:00:00.331) 0:01:11.296 ********
2026-01-05 01:09:41.411226 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411245 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411263 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411280 | orchestrator |
2026-01-05 01:09:41.411298 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-01-05 01:09:41.411318 | orchestrator | Monday 05 January 2026 01:09:32 +0000 (0:00:00.299) 0:01:11.596 ********
2026-01-05 01:09:41.411336 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411353 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411372 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411390 | orchestrator |
2026-01-05 01:09:41.411408 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-01-05 01:09:41.411426 | orchestrator | Monday 05 January 2026 01:09:33 +0000 (0:00:00.511) 0:01:12.107 ********
2026-01-05 01:09:41.411437 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411449 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411460 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411470 | orchestrator |
2026-01-05 01:09:41.411481 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-01-05 01:09:41.411493 | orchestrator | Monday 05 January 2026 01:09:33 +0000 (0:00:00.324) 0:01:12.431 ********
2026-01-05 01:09:41.411521 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411532 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411543 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411555 | orchestrator |
2026-01-05 01:09:41.411566 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-01-05 01:09:41.411577 | orchestrator | Monday 05 January 2026 01:09:33 +0000 (0:00:00.320) 0:01:12.752 ********
2026-01-05 01:09:41.411588 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411600 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411610 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411621 | orchestrator |
2026-01-05 01:09:41.411648 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-01-05 01:09:41.411660 | orchestrator | Monday 05 January 2026 01:09:34 +0000 (0:00:00.417) 0:01:13.169 ********
2026-01-05 01:09:41.411671 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411681 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411692 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411703 | orchestrator |
2026-01-05 01:09:41.411714 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-01-05 01:09:41.411725 | orchestrator | Monday 05 January 2026 01:09:34 +0000 (0:00:00.530) 0:01:13.699 ********
2026-01-05 01:09:41.411736 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411747 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411757 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411768 | orchestrator |
2026-01-05 01:09:41.411779 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-01-05 01:09:41.411790 | orchestrator | Monday 05 January 2026 01:09:35 +0000 (0:00:00.374) 0:01:14.074 ********
2026-01-05 01:09:41.411801 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411812 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411823 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411833 | orchestrator |
2026-01-05 01:09:41.411844 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-01-05 01:09:41.411855 | orchestrator | Monday 05 January 2026 01:09:35 +0000 (0:00:00.336) 0:01:14.410 ********
2026-01-05 01:09:41.411893 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.411905 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.411916 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.411927 | orchestrator |
2026-01-05 01:09:41.411938 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-05 01:09:41.411948 | orchestrator | Monday 05 January 2026 01:09:35 +0000 (0:00:00.307) 0:01:14.718 ********
2026-01-05 01:09:41.411960 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:09:41.411971 | orchestrator |
2026-01-05 01:09:41.411982 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-01-05 01:09:41.411993 | orchestrator | Monday 05 January 2026 01:09:36 +0000 (0:00:00.776) 0:01:15.495 ********
2026-01-05 01:09:41.412004 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:41.412048 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:41.412067 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:41.412086 | orchestrator |
2026-01-05 01:09:41.412105 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-01-05 01:09:41.412124 | orchestrator | Monday 05 January 2026 01:09:37 +0000 (0:00:00.535) 0:01:16.031 ********
2026-01-05 01:09:41.412141 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:09:41.412152 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:09:41.412163 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:09:41.412173 | orchestrator |
2026-01-05 01:09:41.412184 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-01-05 01:09:41.412195 | orchestrator | Monday 05 January 2026 01:09:37 +0000 (0:00:00.427) 0:01:16.458 ********
2026-01-05 01:09:41.412206 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.412217 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.412241 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.412253 | orchestrator |
2026-01-05 01:09:41.412264 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-01-05 01:09:41.412274 | orchestrator | Monday 05 January 2026 01:09:37 +0000 (0:00:00.357) 0:01:16.816 ********
2026-01-05 01:09:41.412285 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.412296 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.412307 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.412317 | orchestrator |
2026-01-05 01:09:41.412328 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-01-05 01:09:41.412339 | orchestrator | Monday 05 January 2026 01:09:38 +0000 (0:00:00.532) 0:01:17.348 ********
2026-01-05 01:09:41.412349 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.412360 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.412371 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.412382 | orchestrator |
2026-01-05 01:09:41.412392 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-01-05 01:09:41.412403 | orchestrator | Monday 05 January 2026 01:09:38 +0000 (0:00:00.324) 0:01:17.673 ********
2026-01-05 01:09:41.412414 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:09:41.412425 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:09:41.412435 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:09:41.412446 | orchestrator |
2026-01-05 01:09:41.412457 | orchestrator | TASK [ovn-db : Set
bootstrap args fact for NB (new member)] ******************** 2026-01-05 01:09:41.412468 | orchestrator | Monday 05 January 2026 01:09:39 +0000 (0:00:00.331) 0:01:18.004 ******** 2026-01-05 01:09:41.412478 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:41.412489 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:41.412500 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:41.412510 | orchestrator | 2026-01-05 01:09:41.412521 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-05 01:09:41.412532 | orchestrator | Monday 05 January 2026 01:09:39 +0000 (0:00:00.331) 0:01:18.335 ******** 2026-01-05 01:09:41.412542 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:09:41.412553 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:09:41.412564 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:09:41.412575 | orchestrator | 2026-01-05 01:09:41.412586 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-05 01:09:41.412597 | orchestrator | Monday 05 January 2026 01:09:39 +0000 (0:00:00.545) 0:01:18.880 ******** 2026-01-05 01:09:41.412616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:41.412631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-05 01:09:41.412643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:41.412666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060545 | orchestrator | 2026-01-05 01:09:48.060563 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-05 01:09:48.060581 | orchestrator | Monday 05 January 2026 01:09:41 +0000 (0:00:01.505) 0:01:20.386 ******** 2026-01-05 01:09:48.060597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060747 | orchestrator | 2026-01-05 01:09:48.060756 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-05 01:09:48.060765 | orchestrator | Monday 05 January 2026 01:09:45 +0000 (0:00:04.008) 0:01:24.394 ******** 2026-01-05 01:09:48.060774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:09:48.060852 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.515754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.516704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.516750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.516756 | orchestrator | 2026-01-05 01:10:13.516763 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 01:10:13.516769 | 
orchestrator | Monday 05 January 2026 01:09:47 +0000 (0:00:02.197) 0:01:26.591 ******** 2026-01-05 01:10:13.516772 | orchestrator | 2026-01-05 01:10:13.516777 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 01:10:13.516781 | orchestrator | Monday 05 January 2026 01:09:47 +0000 (0:00:00.071) 0:01:26.663 ******** 2026-01-05 01:10:13.516784 | orchestrator | 2026-01-05 01:10:13.516788 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 01:10:13.516792 | orchestrator | Monday 05 January 2026 01:09:47 +0000 (0:00:00.065) 0:01:26.729 ******** 2026-01-05 01:10:13.516796 | orchestrator | 2026-01-05 01:10:13.516800 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-05 01:10:13.516804 | orchestrator | Monday 05 January 2026 01:09:48 +0000 (0:00:00.298) 0:01:27.027 ******** 2026-01-05 01:10:13.516808 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:10:13.516814 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:10:13.516821 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:10:13.516827 | orchestrator | 2026-01-05 01:10:13.516833 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-05 01:10:13.516840 | orchestrator | Monday 05 January 2026 01:09:55 +0000 (0:00:07.396) 0:01:34.424 ******** 2026-01-05 01:10:13.516869 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:10:13.516875 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:10:13.516881 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:10:13.516886 | orchestrator | 2026-01-05 01:10:13.516891 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-05 01:10:13.516897 | orchestrator | Monday 05 January 2026 01:09:58 +0000 (0:00:03.350) 0:01:37.775 ******** 2026-01-05 01:10:13.516903 | orchestrator | changed: 
[testbed-node-0] 2026-01-05 01:10:13.516922 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:10:13.516929 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:10:13.516935 | orchestrator | 2026-01-05 01:10:13.516941 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-05 01:10:13.516947 | orchestrator | Monday 05 January 2026 01:10:06 +0000 (0:00:07.499) 0:01:45.275 ******** 2026-01-05 01:10:13.516953 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:10:13.516959 | orchestrator | 2026-01-05 01:10:13.516965 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-05 01:10:13.517003 | orchestrator | Monday 05 January 2026 01:10:06 +0000 (0:00:00.130) 0:01:45.406 ******** 2026-01-05 01:10:13.517011 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:10:13.517018 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:10:13.517024 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:10:13.517045 | orchestrator | 2026-01-05 01:10:13.517051 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-05 01:10:13.517056 | orchestrator | Monday 05 January 2026 01:10:07 +0000 (0:00:01.057) 0:01:46.463 ******** 2026-01-05 01:10:13.517061 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:10:13.517067 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:10:13.517073 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:10:13.517078 | orchestrator | 2026-01-05 01:10:13.517083 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-05 01:10:13.517087 | orchestrator | Monday 05 January 2026 01:10:08 +0000 (0:00:00.679) 0:01:47.142 ******** 2026-01-05 01:10:13.517094 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:10:13.517100 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:10:13.517109 | orchestrator | ok: [testbed-node-2] 2026-01-05 
01:10:13.517116 | orchestrator | 2026-01-05 01:10:13.517124 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-05 01:10:13.517140 | orchestrator | Monday 05 January 2026 01:10:09 +0000 (0:00:00.883) 0:01:48.025 ******** 2026-01-05 01:10:13.517146 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:10:13.517152 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:10:13.517158 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:10:13.517164 | orchestrator | 2026-01-05 01:10:13.517170 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-05 01:10:13.517176 | orchestrator | Monday 05 January 2026 01:10:09 +0000 (0:00:00.640) 0:01:48.666 ******** 2026-01-05 01:10:13.517182 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:10:13.517189 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:10:13.517217 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:10:13.517223 | orchestrator | 2026-01-05 01:10:13.517228 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-05 01:10:13.517235 | orchestrator | Monday 05 January 2026 01:10:10 +0000 (0:00:00.823) 0:01:49.489 ******** 2026-01-05 01:10:13.517241 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:10:13.517247 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:10:13.517253 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:10:13.517259 | orchestrator | 2026-01-05 01:10:13.517265 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-05 01:10:13.517270 | orchestrator | Monday 05 January 2026 01:10:11 +0000 (0:00:01.113) 0:01:50.603 ******** 2026-01-05 01:10:13.517276 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:10:13.517281 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:10:13.517286 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:10:13.517293 | orchestrator | 2026-01-05 
01:10:13.517300 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-05 01:10:13.517317 | orchestrator | Monday 05 January 2026 01:10:11 +0000 (0:00:00.327) 0:01:50.931 ******** 2026-01-05 01:10:13.517325 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.517334 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.517341 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.517348 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.517362 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.517367 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.517371 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.517375 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:13.517387 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:20.859480 | orchestrator | 2026-01-05 01:10:20.859583 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-05 01:10:20.859596 | orchestrator | Monday 05 January 2026 01:10:13 +0000 (0:00:01.554) 0:01:52.485 ******** 2026-01-05 01:10:20.859607 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:20.859618 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:20.859626 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 01:10:20.859635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859679 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859754 | orchestrator |
2026-01-05 01:10:20.859777 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-01-05 01:10:20.859792 | orchestrator | Monday 05 January 2026 01:10:17 +0000 (0:00:03.988) 0:01:56.473 ********
2026-01-05 01:10:20.859826 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859842 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859857 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859882 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859912 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 01:10:20.859936 | orchestrator |
2026-01-05 01:10:20.859945 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-05 01:10:20.859953 | orchestrator | Monday 05 January 2026 01:10:20 +0000 (0:00:03.142) 0:01:59.616 ********
2026-01-05 01:10:20.860029 | orchestrator |
2026-01-05 01:10:20.860041 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-05 01:10:20.860052 | orchestrator | Monday 05 January 2026 01:10:20 +0000 (0:00:00.065) 0:01:59.681 ********
2026-01-05 01:10:20.860061 | orchestrator |
2026-01-05 01:10:20.860070 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-05 01:10:20.860080 | orchestrator | Monday 05 January 2026 01:10:20 +0000 (0:00:00.068) 0:01:59.750 ********
2026-01-05 01:10:20.860089 | orchestrator |
2026-01-05 01:10:20.860106 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-05 01:10:45.495572 | orchestrator | Monday 05 January 2026 01:10:20 +0000 (0:00:00.066) 0:01:59.817 ********
2026-01-05 01:10:45.495732 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:10:45.495756 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:10:45.495769 | orchestrator |
2026-01-05 01:10:45.495781 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-05 01:10:45.495793 | orchestrator | Monday 05 January 2026 01:10:27 +0000 (0:00:06.215) 0:02:06.033 ********
2026-01-05 01:10:45.495804 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:10:45.495815 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:10:45.495827 | orchestrator |
2026-01-05 01:10:45.495838 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-05 01:10:45.495849 | orchestrator | Monday 05 January 2026 01:10:33 +0000 (0:00:06.207) 0:02:12.241 ********
2026-01-05 01:10:45.495860 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:10:45.495871 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:10:45.495882 | orchestrator |
2026-01-05 01:10:45.495894 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-05 01:10:45.495905 | orchestrator | Monday 05 January 2026 01:10:39 +0000 (0:00:06.219) 0:02:18.460 ********
2026-01-05 01:10:45.495916 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:10:45.495926 | orchestrator |
2026-01-05 01:10:45.495963 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-05 01:10:45.495974 | orchestrator | Monday 05 January 2026 01:10:39 +0000 (0:00:00.150) 0:02:18.610 ********
2026-01-05 01:10:45.495985 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:10:45.495997 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:10:45.496008 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:10:45.496019 | orchestrator |
2026-01-05 01:10:45.496030 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-05 01:10:45.496043 | orchestrator | Monday 05 January 2026 01:10:40 +0000 (0:00:01.059) 0:02:19.670 ********
2026-01-05 01:10:45.496056 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:10:45.496068 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:10:45.496086 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:10:45.496105 | orchestrator |
2026-01-05 01:10:45.496118 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-05 01:10:45.496132 | orchestrator | Monday 05 January 2026 01:10:41 +0000 (0:00:00.691) 0:02:20.362 ********
2026-01-05 01:10:45.496144 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:10:45.496157 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:10:45.496170 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:10:45.496183 | orchestrator |
2026-01-05 01:10:45.496196 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-05 01:10:45.496209 | orchestrator | Monday 05 January 2026 01:10:42 +0000 (0:00:00.810) 0:02:21.172 ********
2026-01-05 01:10:45.496237 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:10:45.496250 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:10:45.496263 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:10:45.496302 | orchestrator |
2026-01-05 01:10:45.496316 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-05 01:10:45.496328 | orchestrator | Monday 05 January 2026 01:10:42 +0000 (0:00:00.658) 0:02:21.830 ********
2026-01-05 01:10:45.496341 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:10:45.496370 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:10:45.496383 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:10:45.496397 | orchestrator |
2026-01-05 01:10:45.496410 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-05 01:10:45.496440 | orchestrator | Monday 05 January 2026 01:10:44 +0000 (0:00:01.282) 0:02:23.113 ********
2026-01-05 01:10:45.496451 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:10:45.496462 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:10:45.496472 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:10:45.496483 | orchestrator |
2026-01-05 01:10:45.496494 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:10:45.496507 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-05 01:10:45.496520 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-05 01:10:45.496531 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-05 01:10:45.496542 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:10:45.496553 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:10:45.496564 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:10:45.496575 | orchestrator |
2026-01-05 01:10:45.496586 | orchestrator |
2026-01-05 01:10:45.496597 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:10:45.496608 | orchestrator | Monday 05 January 2026 01:10:45 +0000 (0:00:00.918) 0:02:24.031 ********
2026-01-05 01:10:45.496619 | orchestrator | ===============================================================================
2026-01-05 01:10:45.496630 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 31.24s
2026-01-05 01:10:45.496641 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.91s
2026-01-05 01:10:45.496652 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.72s
2026-01-05 01:10:45.496663 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.61s
2026-01-05 01:10:45.496674 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.56s
2026-01-05 01:10:45.496720 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.01s
2026-01-05 01:10:45.496732 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.99s
2026-01-05 01:10:45.496743 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.14s
2026-01-05 01:10:45.496754 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.97s
2026-01-05 01:10:45.496765 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.20s
2026-01-05 01:10:45.496776 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.70s
2026-01-05 01:10:45.496787 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.59s
2026-01-05 01:10:45.496798 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.55s
2026-01-05 01:10:45.496808 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.55s
2026-01-05 01:10:45.496819 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s
2026-01-05 01:10:45.496839 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.47s
2026-01-05 01:10:45.496850 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.28s
2026-01-05 01:10:45.496861 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.22s
2026-01-05 01:10:45.496871 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.17s
2026-01-05 01:10:45.496882 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.16s
2026-01-05 01:10:45.851666 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-05 01:10:45.851746 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-01-05 01:10:48.147733 | orchestrator | 2026-01-05 01:10:48 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-05 01:10:58.326705 | orchestrator | 2026-01-05 01:10:58 | INFO  | Task 3d0ba0a8-9332-4282-9852-327cde9296d6 (wipe-partitions) was prepared for execution.
2026-01-05 01:10:58.326824 | orchestrator | 2026-01-05 01:10:58 | INFO  | It takes a moment until task 3d0ba0a8-9332-4282-9852-327cde9296d6 (wipe-partitions) has been started and output is visible here.
2026-01-05 01:11:11.716023 | orchestrator |
2026-01-05 01:11:11.716211 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-05 01:11:11.716245 | orchestrator |
2026-01-05 01:11:11.716264 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-05 01:11:11.716281 | orchestrator | Monday 05 January 2026 01:11:02 +0000 (0:00:00.129) 0:00:00.129 ********
2026-01-05 01:11:11.716300 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:11.716321 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:11.716341 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:11.716359 | orchestrator |
2026-01-05 01:11:11.716378 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-05 01:11:11.716398 | orchestrator | Monday 05 January 2026 01:11:02 +0000 (0:00:00.657) 0:00:00.787 ********
2026-01-05 01:11:11.716416 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:11.716434 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:11:11.716452 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:11:11.716471 | orchestrator |
2026-01-05 01:11:11.716491 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-05 01:11:11.716510 | orchestrator | Monday 05 January 2026 01:11:03 +0000 (0:00:00.378) 0:00:01.165 ********
2026-01-05 01:11:11.716531 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:11:11.716553 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:11:11.716572 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:11:11.716592 | orchestrator |
2026-01-05 01:11:11.716612 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-05 01:11:11.716631 | orchestrator | Monday 05 January 2026 01:11:03 +0000 (0:00:00.654) 0:00:01.819 ********
2026-01-05 01:11:11.716650 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:11.716669 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:11:11.716689 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:11:11.716709 | orchestrator |
2026-01-05 01:11:11.716729 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-05 01:11:11.716749 | orchestrator | Monday 05 January 2026 01:11:04 +0000 (0:00:00.287) 0:00:02.106 ********
2026-01-05 01:11:11.716769 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 01:11:11.716788 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 01:11:11.716808 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 01:11:11.716830 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 01:11:11.716850 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 01:11:11.716870 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 01:11:11.716883 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 01:11:11.716966 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 01:11:11.716991 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 01:11:11.717009 | orchestrator |
2026-01-05 01:11:11.717025 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-05 01:11:11.717039 | orchestrator | Monday 05 January 2026 01:11:05 +0000 (0:00:01.458) 0:00:03.565 ********
2026-01-05 01:11:11.717057 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 01:11:11.717075 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 01:11:11.717092 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 01:11:11.717110 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 01:11:11.717129 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 01:11:11.717147 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 01:11:11.717165 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 01:11:11.717183 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 01:11:11.717202 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 01:11:11.717221 | orchestrator |
2026-01-05 01:11:11.717239 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-05 01:11:11.717257 | orchestrator | Monday 05 January 2026 01:11:07 +0000 (0:00:01.856) 0:00:05.421 ********
2026-01-05 01:11:11.717276 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-05 01:11:11.717430 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-05 01:11:11.717470 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-05 01:11:11.717482 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-05 01:11:11.717502 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-05 01:11:11.717512 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-05 01:11:11.717522 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-05 01:11:11.717532 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-05 01:11:11.717541 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-05 01:11:11.717551 | orchestrator |
2026-01-05 01:11:11.717560 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-05 01:11:11.717570 | orchestrator | Monday 05 January 2026 01:11:09 +0000 (0:00:02.322) 0:00:07.743 ********
2026-01-05 01:11:11.717580 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:11.717589 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:11.717599 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:11.717609 | orchestrator |
2026-01-05 01:11:11.717618 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-05 01:11:11.717628 | orchestrator | Monday 05 January 2026 01:11:10 +0000 (0:00:00.690) 0:00:08.434 ********
2026-01-05 01:11:11.717637 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:11:11.717647 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:11:11.717656 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:11:11.717666 | orchestrator |
2026-01-05 01:11:11.717675 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:11:11.717686 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:11.717698 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:11.717732 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:11.717742 | orchestrator |
2026-01-05 01:11:11.717751 | orchestrator |
2026-01-05 01:11:11.717761 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:11:11.717771 | orchestrator | Monday 05 January 2026 01:11:11 +0000 (0:00:00.735) 0:00:09.169 ********
2026-01-05 01:11:11.717781 | orchestrator | ===============================================================================
2026-01-05 01:11:11.717802 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.32s
2026-01-05 01:11:11.717817 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.86s
2026-01-05 01:11:11.717827 | orchestrator | Check device availability ----------------------------------------------- 1.46s
2026-01-05 01:11:11.717837 | orchestrator | Request device events from the kernel ----------------------------------- 0.74s
2026-01-05 01:11:11.717847 | orchestrator | Reload udev rules ------------------------------------------------------- 0.69s
2026-01-05 01:11:11.717856 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.66s
2026-01-05 01:11:11.717866 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.65s
2026-01-05 01:11:11.717876 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s
2026-01-05 01:11:11.717886 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s
2026-01-05 01:11:24.231027 | orchestrator | 2026-01-05 01:11:24 | INFO  | Task 899c16c5-582a-409d-9747-cdcaa6f494f1 (facts) was prepared for execution.
2026-01-05 01:11:24.231123 | orchestrator | 2026-01-05 01:11:24 | INFO  | It takes a moment until task 899c16c5-582a-409d-9747-cdcaa6f494f1 (facts) has been started and output is visible here.
2026-01-05 01:11:38.469134 | orchestrator |
2026-01-05 01:11:38.469241 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-05 01:11:38.469256 | orchestrator |
2026-01-05 01:11:38.469266 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-05 01:11:38.469277 | orchestrator | Monday 05 January 2026 01:11:28 +0000 (0:00:00.269) 0:00:00.269 ********
2026-01-05 01:11:38.469286 | orchestrator | ok: [testbed-manager]
2026-01-05 01:11:38.469298 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:38.469307 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:11:38.469317 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:11:38.469322 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:11:38.469328 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:11:38.469334 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:11:38.469340 | orchestrator |
2026-01-05 01:11:38.469345 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-05 01:11:38.469351 | orchestrator | Monday 05 January 2026 01:11:29 +0000 (0:00:01.178) 0:00:01.448 ********
2026-01-05 01:11:38.469357 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:11:38.469403 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:38.469409 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:38.469415 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:38.469421 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:38.469426 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:11:38.469432 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:11:38.469438 | orchestrator |
2026-01-05 01:11:38.469444 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 01:11:38.469449 | orchestrator |
2026-01-05 01:11:38.469455 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 01:11:38.469461 | orchestrator | Monday 05 January 2026 01:11:31 +0000 (0:00:01.303) 0:00:02.751 ********
2026-01-05 01:11:38.469467 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:11:38.469476 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:11:38.469487 | orchestrator | ok: [testbed-manager]
2026-01-05 01:11:38.469499 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:11:38.469509 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:11:38.469517 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:11:38.469525 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:11:38.469534 | orchestrator |
2026-01-05 01:11:38.469543 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-05 01:11:38.469551 | orchestrator |
2026-01-05 01:11:38.469560 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-05 01:11:38.469568 | orchestrator | Monday 05 January 2026 01:11:37 +0000 (0:00:06.328) 0:00:09.079 ********
2026-01-05 01:11:38.469603 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:11:38.469614 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:11:38.469623 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:11:38.469632 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:11:38.469642 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:38.469648 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:11:38.469653 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:11:38.469658 | orchestrator |
2026-01-05 01:11:38.469664 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:11:38.469670 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:38.469678 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:38.469684 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:38.469690 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:38.469697 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:38.469765 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:38.469772 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:11:38.469778 | orchestrator |
2026-01-05 01:11:38.469785 | orchestrator |
2026-01-05 01:11:38.469791 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:11:38.469798 | orchestrator | Monday 05 January 2026 01:11:38 +0000 (0:00:00.599) 0:00:09.679 ********
2026-01-05 01:11:38.469819 | orchestrator | ===============================================================================
2026-01-05 01:11:38.469826 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.33s
2026-01-05 01:11:38.469832 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.30s
2026-01-05 01:11:38.469839 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s
2026-01-05 01:11:38.469845 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s
2026-01-05 01:11:40.935135 | orchestrator | 2026-01-05 01:11:40 | INFO  | Task f0557d6f-ba73-4dca-af51-139a8c28bbe6 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-05 01:11:40.935250 | orchestrator | 2026-01-05 01:11:40 | INFO  | It takes a moment until task f0557d6f-ba73-4dca-af51-139a8c28bbe6 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-05 01:11:54.074995 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 01:11:54.075749 | orchestrator | 2.16.14
2026-01-05 01:11:54.075791 | orchestrator |
2026-01-05 01:11:54.075801 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-05 01:11:54.075809 | orchestrator |
2026-01-05 01:11:54.075816 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-05 01:11:54.075824 | orchestrator | Monday 05 January 2026 01:11:45 +0000 (0:00:00.368) 0:00:00.368 ********
2026-01-05 01:11:54.075832 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 01:11:54.075851 | orchestrator |
2026-01-05 01:11:54.075859 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-05 01:11:54.075866 | orchestrator | Monday 05 January 2026 01:11:45 +0000 (0:00:00.293) 0:00:00.661 ********
2026-01-05 01:11:54.075873 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:11:54.075898 | orchestrator |
2026-01-05 01:11:54.075905 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.075912 | orchestrator | Monday 05 January 2026 01:11:46 +0000 (0:00:00.248) 0:00:00.910 ********
2026-01-05 01:11:54.075919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-05 01:11:54.075925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-05 01:11:54.075930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-05 01:11:54.075936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-05 01:11:54.075942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-05 01:11:54.075947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-05 01:11:54.075953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-05 01:11:54.075959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-05 01:11:54.075965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-05 01:11:54.075972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-05 01:11:54.075978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-05 01:11:54.075985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-05 01:11:54.075991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-05 01:11:54.075997 | orchestrator |
2026-01-05 01:11:54.076003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076009 | orchestrator | Monday 05 January 2026 01:11:46 +0000 (0:00:00.522) 0:00:01.432 ********
2026-01-05 01:11:54.076015 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:54.076022 | orchestrator |
2026-01-05 01:11:54.076028 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076033 | orchestrator | Monday 05 January 2026 01:11:46 +0000 (0:00:00.219) 0:00:01.652 ********
2026-01-05 01:11:54.076039 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:54.076044 | orchestrator |
2026-01-05 01:11:54.076050 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076056 | orchestrator | Monday 05 January 2026 01:11:47 +0000 (0:00:00.224) 0:00:01.876 ********
2026-01-05 01:11:54.076063 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:54.076069 | orchestrator |
2026-01-05 01:11:54.076076 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076082 | orchestrator | Monday 05 January 2026 01:11:47 +0000 (0:00:00.218) 0:00:02.095 ********
2026-01-05 01:11:54.076089 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:54.076095 | orchestrator |
2026-01-05 01:11:54.076101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076108 | orchestrator | Monday 05 January 2026 01:11:47 +0000 (0:00:00.239) 0:00:02.334 ********
2026-01-05 01:11:54.076114 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:54.076121 | orchestrator |
2026-01-05 01:11:54.076127 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076134 | orchestrator | Monday 05 January 2026 01:11:47 +0000 (0:00:00.227) 0:00:02.561 ********
2026-01-05 01:11:54.076140 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:54.076146 | orchestrator |
2026-01-05 01:11:54.076153 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076159 | orchestrator | Monday 05 January 2026 01:11:48 +0000 (0:00:00.226) 0:00:02.787 ********
2026-01-05 01:11:54.076174 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:54.076185 | orchestrator |
2026-01-05 01:11:54.076189 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076193 | orchestrator | Monday 05 January 2026 01:11:48 +0000 (0:00:00.223) 0:00:03.011 ********
2026-01-05 01:11:54.076197 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:11:54.076200 | orchestrator |
2026-01-05 01:11:54.076204 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076208 | orchestrator | Monday 05 January 2026 01:11:48 +0000 (0:00:00.209) 0:00:03.220 ********
2026-01-05 01:11:54.076212 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b)
2026-01-05 01:11:54.076218 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b)
2026-01-05 01:11:54.076221 | orchestrator |
2026-01-05 01:11:54.076226 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076245 | orchestrator | Monday 05 January 2026 01:11:49 +0000 (0:00:00.685) 0:00:03.905 ********
2026-01-05 01:11:54.076249 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d)
2026-01-05 01:11:54.076253 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d)
2026-01-05 01:11:54.076257 | orchestrator |
2026-01-05 01:11:54.076260 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076264 | orchestrator | Monday 05 January 2026 01:11:49 +0000 (0:00:00.724) 0:00:04.630 ********
2026-01-05 01:11:54.076268 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b)
2026-01-05 01:11:54.076272 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b)
2026-01-05 01:11:54.076275 | orchestrator |
2026-01-05 01:11:54.076279 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076283 | orchestrator | Monday 05 January 2026 01:11:50 +0000 (0:00:00.946) 0:00:05.576 ********
2026-01-05 01:11:54.076287 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41)
2026-01-05 01:11:54.076290 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41)
2026-01-05 01:11:54.076294 | orchestrator |
2026-01-05 01:11:54.076298 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:11:54.076301 | orchestrator | Monday 05 January 2026 01:11:51 +0000 (0:00:00.456) 0:00:06.032 ********
2026-01-05 01:11:54.076305 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 01:11:54.076309 | orchestrator |
2026-01-05 01:11:54.076313 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:11:54.076317 | orchestrator | Monday 05 January 2026 01:11:51 +0000 (0:00:00.360) 0:00:06.393 ********
2026-01-05 01:11:54.076320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-05 01:11:54.076324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-05 01:11:54.076328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-05 01:11:54.076332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-05 01:11:54.076335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-05 01:11:54.076339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-05 01:11:54.076343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-05 01:11:54.076346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml
for testbed-node-3 => (item=loop7) 2026-01-05 01:11:54.076350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-05 01:11:54.076357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-05 01:11:54.076361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-05 01:11:54.076365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-05 01:11:54.076369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-05 01:11:54.076372 | orchestrator | 2026-01-05 01:11:54.076376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:11:54.076380 | orchestrator | Monday 05 January 2026 01:11:52 +0000 (0:00:00.468) 0:00:06.861 ******** 2026-01-05 01:11:54.076383 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:54.076387 | orchestrator | 2026-01-05 01:11:54.076391 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:11:54.076395 | orchestrator | Monday 05 January 2026 01:11:52 +0000 (0:00:00.204) 0:00:07.066 ******** 2026-01-05 01:11:54.076399 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:54.076402 | orchestrator | 2026-01-05 01:11:54.076406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:11:54.076410 | orchestrator | Monday 05 January 2026 01:11:52 +0000 (0:00:00.218) 0:00:07.284 ******** 2026-01-05 01:11:54.076414 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:54.076417 | orchestrator | 2026-01-05 01:11:54.076421 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:11:54.076429 | orchestrator | Monday 05 January 2026 01:11:52 
+0000 (0:00:00.223) 0:00:07.508 ******** 2026-01-05 01:11:54.076435 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:54.076440 | orchestrator | 2026-01-05 01:11:54.076445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:11:54.076450 | orchestrator | Monday 05 January 2026 01:11:52 +0000 (0:00:00.227) 0:00:07.736 ******** 2026-01-05 01:11:54.076455 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:54.076461 | orchestrator | 2026-01-05 01:11:54.076470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:11:54.076477 | orchestrator | Monday 05 January 2026 01:11:53 +0000 (0:00:00.216) 0:00:07.952 ******** 2026-01-05 01:11:54.076483 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:54.076489 | orchestrator | 2026-01-05 01:11:54.076495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:11:54.076500 | orchestrator | Monday 05 January 2026 01:11:53 +0000 (0:00:00.643) 0:00:08.596 ******** 2026-01-05 01:11:54.076506 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:11:54.076512 | orchestrator | 2026-01-05 01:11:54.076522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:01.838429 | orchestrator | Monday 05 January 2026 01:11:54 +0000 (0:00:00.220) 0:00:08.817 ******** 2026-01-05 01:12:01.838555 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.838576 | orchestrator | 2026-01-05 01:12:01.838591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:01.838604 | orchestrator | Monday 05 January 2026 01:11:54 +0000 (0:00:00.229) 0:00:09.046 ******** 2026-01-05 01:12:01.838617 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-05 01:12:01.838630 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-05 
01:12:01.838643 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-05 01:12:01.838656 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-05 01:12:01.838668 | orchestrator | 2026-01-05 01:12:01.838680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:01.838693 | orchestrator | Monday 05 January 2026 01:11:54 +0000 (0:00:00.698) 0:00:09.745 ******** 2026-01-05 01:12:01.838704 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.838717 | orchestrator | 2026-01-05 01:12:01.838729 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:01.838741 | orchestrator | Monday 05 January 2026 01:11:55 +0000 (0:00:00.235) 0:00:09.980 ******** 2026-01-05 01:12:01.838782 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.838796 | orchestrator | 2026-01-05 01:12:01.838808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:01.838820 | orchestrator | Monday 05 January 2026 01:11:55 +0000 (0:00:00.234) 0:00:10.214 ******** 2026-01-05 01:12:01.838897 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.838911 | orchestrator | 2026-01-05 01:12:01.838923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:01.838936 | orchestrator | Monday 05 January 2026 01:11:55 +0000 (0:00:00.233) 0:00:10.447 ******** 2026-01-05 01:12:01.838949 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.838962 | orchestrator | 2026-01-05 01:12:01.838975 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-05 01:12:01.838988 | orchestrator | Monday 05 January 2026 01:11:55 +0000 (0:00:00.225) 0:00:10.673 ******** 2026-01-05 01:12:01.839001 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-01-05 01:12:01.839015 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-01-05 01:12:01.839027 | orchestrator | 2026-01-05 01:12:01.839040 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-05 01:12:01.839052 | orchestrator | Monday 05 January 2026 01:11:56 +0000 (0:00:00.182) 0:00:10.856 ******** 2026-01-05 01:12:01.839065 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.839077 | orchestrator | 2026-01-05 01:12:01.839090 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-05 01:12:01.839103 | orchestrator | Monday 05 January 2026 01:11:56 +0000 (0:00:00.152) 0:00:11.008 ******** 2026-01-05 01:12:01.839115 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.839128 | orchestrator | 2026-01-05 01:12:01.839140 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-05 01:12:01.839153 | orchestrator | Monday 05 January 2026 01:11:56 +0000 (0:00:00.161) 0:00:11.169 ******** 2026-01-05 01:12:01.839166 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.839179 | orchestrator | 2026-01-05 01:12:01.839192 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-05 01:12:01.839205 | orchestrator | Monday 05 January 2026 01:11:56 +0000 (0:00:00.356) 0:00:11.526 ******** 2026-01-05 01:12:01.839217 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:12:01.839229 | orchestrator | 2026-01-05 01:12:01.839241 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-05 01:12:01.839253 | orchestrator | Monday 05 January 2026 01:11:56 +0000 (0:00:00.155) 0:00:11.681 ******** 2026-01-05 01:12:01.839266 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b63b326-8bb9-546b-aabb-a628fef076ec'}}) 2026-01-05 01:12:01.839278 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'}}) 2026-01-05 01:12:01.839290 | orchestrator | 2026-01-05 01:12:01.839302 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-05 01:12:01.839314 | orchestrator | Monday 05 January 2026 01:11:57 +0000 (0:00:00.194) 0:00:11.876 ******** 2026-01-05 01:12:01.839328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b63b326-8bb9-546b-aabb-a628fef076ec'}})  2026-01-05 01:12:01.839342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'}})  2026-01-05 01:12:01.839353 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.839365 | orchestrator | 2026-01-05 01:12:01.839394 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-05 01:12:01.839407 | orchestrator | Monday 05 January 2026 01:11:57 +0000 (0:00:00.158) 0:00:12.035 ******** 2026-01-05 01:12:01.839418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b63b326-8bb9-546b-aabb-a628fef076ec'}})  2026-01-05 01:12:01.839441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'}})  2026-01-05 01:12:01.839453 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.839465 | orchestrator | 2026-01-05 01:12:01.839478 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 01:12:01.839490 | orchestrator | Monday 05 January 2026 01:11:57 +0000 (0:00:00.164) 0:00:12.200 ******** 2026-01-05 01:12:01.839502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b63b326-8bb9-546b-aabb-a628fef076ec'}})  2026-01-05 01:12:01.839535 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'}})  2026-01-05 01:12:01.839548 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.839561 | orchestrator | 2026-01-05 01:12:01.839573 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 01:12:01.839585 | orchestrator | Monday 05 January 2026 01:11:57 +0000 (0:00:00.155) 0:00:12.356 ******** 2026-01-05 01:12:01.839597 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:12:01.839610 | orchestrator | 2026-01-05 01:12:01.839621 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-05 01:12:01.839634 | orchestrator | Monday 05 January 2026 01:11:57 +0000 (0:00:00.150) 0:00:12.506 ******** 2026-01-05 01:12:01.839645 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:12:01.839656 | orchestrator | 2026-01-05 01:12:01.839667 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-05 01:12:01.839679 | orchestrator | Monday 05 January 2026 01:11:57 +0000 (0:00:00.175) 0:00:12.681 ******** 2026-01-05 01:12:01.839691 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.839702 | orchestrator | 2026-01-05 01:12:01.839714 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-05 01:12:01.839726 | orchestrator | Monday 05 January 2026 01:11:58 +0000 (0:00:00.189) 0:00:12.871 ******** 2026-01-05 01:12:01.839738 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.839750 | orchestrator | 2026-01-05 01:12:01.839760 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-05 01:12:01.839770 | orchestrator | Monday 05 January 2026 01:11:58 +0000 (0:00:00.147) 0:00:13.018 ******** 2026-01-05 01:12:01.839782 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.839792 | orchestrator | 2026-01-05 
01:12:01.839802 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-05 01:12:01.839813 | orchestrator | Monday 05 January 2026 01:11:58 +0000 (0:00:00.151) 0:00:13.170 ******** 2026-01-05 01:12:01.839824 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 01:12:01.839904 | orchestrator |  "ceph_osd_devices": { 2026-01-05 01:12:01.839915 | orchestrator |  "sdb": { 2026-01-05 01:12:01.839926 | orchestrator |  "osd_lvm_uuid": "9b63b326-8bb9-546b-aabb-a628fef076ec" 2026-01-05 01:12:01.839938 | orchestrator |  }, 2026-01-05 01:12:01.839949 | orchestrator |  "sdc": { 2026-01-05 01:12:01.839961 | orchestrator |  "osd_lvm_uuid": "b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c" 2026-01-05 01:12:01.839973 | orchestrator |  } 2026-01-05 01:12:01.839984 | orchestrator |  } 2026-01-05 01:12:01.839996 | orchestrator | } 2026-01-05 01:12:01.840008 | orchestrator | 2026-01-05 01:12:01.840020 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-05 01:12:01.840032 | orchestrator | Monday 05 January 2026 01:11:58 +0000 (0:00:00.372) 0:00:13.542 ******** 2026-01-05 01:12:01.840045 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.840056 | orchestrator | 2026-01-05 01:12:01.840069 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-05 01:12:01.840081 | orchestrator | Monday 05 January 2026 01:11:58 +0000 (0:00:00.148) 0:00:13.691 ******** 2026-01-05 01:12:01.840093 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.840105 | orchestrator | 2026-01-05 01:12:01.840117 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-05 01:12:01.840139 | orchestrator | Monday 05 January 2026 01:11:59 +0000 (0:00:00.130) 0:00:13.822 ******** 2026-01-05 01:12:01.840151 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:12:01.840162 | orchestrator | 2026-01-05 
01:12:01.840174 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-05 01:12:01.840185 | orchestrator | Monday 05 January 2026 01:11:59 +0000 (0:00:00.144) 0:00:13.966 ******** 2026-01-05 01:12:01.840197 | orchestrator | changed: [testbed-node-3] => { 2026-01-05 01:12:01.840209 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-05 01:12:01.840221 | orchestrator |  "ceph_osd_devices": { 2026-01-05 01:12:01.840233 | orchestrator |  "sdb": { 2026-01-05 01:12:01.840245 | orchestrator |  "osd_lvm_uuid": "9b63b326-8bb9-546b-aabb-a628fef076ec" 2026-01-05 01:12:01.840256 | orchestrator |  }, 2026-01-05 01:12:01.840269 | orchestrator |  "sdc": { 2026-01-05 01:12:01.840281 | orchestrator |  "osd_lvm_uuid": "b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c" 2026-01-05 01:12:01.840293 | orchestrator |  } 2026-01-05 01:12:01.840305 | orchestrator |  }, 2026-01-05 01:12:01.840317 | orchestrator |  "lvm_volumes": [ 2026-01-05 01:12:01.840329 | orchestrator |  { 2026-01-05 01:12:01.840341 | orchestrator |  "data": "osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec", 2026-01-05 01:12:01.840353 | orchestrator |  "data_vg": "ceph-9b63b326-8bb9-546b-aabb-a628fef076ec" 2026-01-05 01:12:01.840366 | orchestrator |  }, 2026-01-05 01:12:01.840377 | orchestrator |  { 2026-01-05 01:12:01.840390 | orchestrator |  "data": "osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c", 2026-01-05 01:12:01.840402 | orchestrator |  "data_vg": "ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c" 2026-01-05 01:12:01.840413 | orchestrator |  } 2026-01-05 01:12:01.840426 | orchestrator |  ] 2026-01-05 01:12:01.840438 | orchestrator |  } 2026-01-05 01:12:01.840448 | orchestrator | } 2026-01-05 01:12:01.840459 | orchestrator | 2026-01-05 01:12:01.840479 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-05 01:12:01.840492 | orchestrator | Monday 05 January 2026 01:11:59 +0000 (0:00:00.238) 0:00:14.204 ******** 2026-01-05 
01:12:01.840503 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 01:12:01.840515 | orchestrator | 2026-01-05 01:12:01.840526 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-05 01:12:01.840538 | orchestrator | 2026-01-05 01:12:01.840550 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-05 01:12:01.840562 | orchestrator | Monday 05 January 2026 01:12:01 +0000 (0:00:01.865) 0:00:16.070 ******** 2026-01-05 01:12:01.840573 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-05 01:12:01.840586 | orchestrator | 2026-01-05 01:12:01.840598 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-05 01:12:01.840610 | orchestrator | Monday 05 January 2026 01:12:01 +0000 (0:00:00.261) 0:00:16.331 ******** 2026-01-05 01:12:01.840621 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:12:01.840633 | orchestrator | 2026-01-05 01:12:01.840659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.159325 | orchestrator | Monday 05 January 2026 01:12:01 +0000 (0:00:00.253) 0:00:16.585 ******** 2026-01-05 01:12:11.159457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-05 01:12:11.159478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-05 01:12:11.159487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-05 01:12:11.159497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-05 01:12:11.159506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-05 01:12:11.159515 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-05 01:12:11.159548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-05 01:12:11.159557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-05 01:12:11.159566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-05 01:12:11.159575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-05 01:12:11.159584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-05 01:12:11.159593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-05 01:12:11.159601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-05 01:12:11.159610 | orchestrator | 2026-01-05 01:12:11.159620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.159629 | orchestrator | Monday 05 January 2026 01:12:02 +0000 (0:00:00.874) 0:00:17.460 ******** 2026-01-05 01:12:11.159638 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.159648 | orchestrator | 2026-01-05 01:12:11.159657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.159665 | orchestrator | Monday 05 January 2026 01:12:02 +0000 (0:00:00.223) 0:00:17.683 ******** 2026-01-05 01:12:11.159674 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.159683 | orchestrator | 2026-01-05 01:12:11.159691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.159700 | orchestrator | Monday 05 January 2026 01:12:03 +0000 (0:00:00.221) 0:00:17.905 ******** 2026-01-05 01:12:11.159709 | orchestrator | skipping: 
[testbed-node-4] 2026-01-05 01:12:11.159717 | orchestrator | 2026-01-05 01:12:11.159726 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.159735 | orchestrator | Monday 05 January 2026 01:12:03 +0000 (0:00:00.223) 0:00:18.129 ******** 2026-01-05 01:12:11.159743 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.159752 | orchestrator | 2026-01-05 01:12:11.159760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.159769 | orchestrator | Monday 05 January 2026 01:12:03 +0000 (0:00:00.224) 0:00:18.353 ******** 2026-01-05 01:12:11.159778 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.159787 | orchestrator | 2026-01-05 01:12:11.159796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.159805 | orchestrator | Monday 05 January 2026 01:12:03 +0000 (0:00:00.216) 0:00:18.569 ******** 2026-01-05 01:12:11.159900 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.159916 | orchestrator | 2026-01-05 01:12:11.159925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.159935 | orchestrator | Monday 05 January 2026 01:12:04 +0000 (0:00:00.218) 0:00:18.788 ******** 2026-01-05 01:12:11.159949 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.159965 | orchestrator | 2026-01-05 01:12:11.159985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.159998 | orchestrator | Monday 05 January 2026 01:12:04 +0000 (0:00:00.226) 0:00:19.014 ******** 2026-01-05 01:12:11.160012 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.160024 | orchestrator | 2026-01-05 01:12:11.160036 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.160049 | 
orchestrator | Monday 05 January 2026 01:12:04 +0000 (0:00:00.227) 0:00:19.242 ******** 2026-01-05 01:12:11.160061 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49) 2026-01-05 01:12:11.160077 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49) 2026-01-05 01:12:11.160090 | orchestrator | 2026-01-05 01:12:11.160121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.160147 | orchestrator | Monday 05 January 2026 01:12:05 +0000 (0:00:00.653) 0:00:19.896 ******** 2026-01-05 01:12:11.160161 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70) 2026-01-05 01:12:11.160174 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70) 2026-01-05 01:12:11.160187 | orchestrator | 2026-01-05 01:12:11.160199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.160211 | orchestrator | Monday 05 January 2026 01:12:05 +0000 (0:00:00.684) 0:00:20.581 ******** 2026-01-05 01:12:11.160225 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522) 2026-01-05 01:12:11.160237 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522) 2026-01-05 01:12:11.160251 | orchestrator | 2026-01-05 01:12:11.160266 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.160304 | orchestrator | Monday 05 January 2026 01:12:06 +0000 (0:00:00.937) 0:00:21.518 ******** 2026-01-05 01:12:11.160314 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3) 2026-01-05 01:12:11.160323 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3) 2026-01-05 01:12:11.160331 | orchestrator | 2026-01-05 01:12:11.160339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:11.160347 | orchestrator | Monday 05 January 2026 01:12:07 +0000 (0:00:00.454) 0:00:21.972 ******** 2026-01-05 01:12:11.160355 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 01:12:11.160363 | orchestrator | 2026-01-05 01:12:11.160371 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160378 | orchestrator | Monday 05 January 2026 01:12:07 +0000 (0:00:00.358) 0:00:22.330 ******** 2026-01-05 01:12:11.160386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-05 01:12:11.160394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-05 01:12:11.160402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-05 01:12:11.160409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-05 01:12:11.160417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-05 01:12:11.160425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-05 01:12:11.160432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-05 01:12:11.160440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-05 01:12:11.160448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-05 01:12:11.160456 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-05 01:12:11.160464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-05 01:12:11.160471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-05 01:12:11.160479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-05 01:12:11.160487 | orchestrator | 2026-01-05 01:12:11.160494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160502 | orchestrator | Monday 05 January 2026 01:12:08 +0000 (0:00:00.419) 0:00:22.750 ******** 2026-01-05 01:12:11.160510 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.160518 | orchestrator | 2026-01-05 01:12:11.160526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160542 | orchestrator | Monday 05 January 2026 01:12:08 +0000 (0:00:00.225) 0:00:22.975 ******** 2026-01-05 01:12:11.160550 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.160558 | orchestrator | 2026-01-05 01:12:11.160565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160573 | orchestrator | Monday 05 January 2026 01:12:08 +0000 (0:00:00.216) 0:00:23.192 ******** 2026-01-05 01:12:11.160581 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.160589 | orchestrator | 2026-01-05 01:12:11.160597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160605 | orchestrator | Monday 05 January 2026 01:12:08 +0000 (0:00:00.232) 0:00:23.424 ******** 2026-01-05 01:12:11.160612 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.160620 | orchestrator | 2026-01-05 01:12:11.160628 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160636 | orchestrator | Monday 05 January 2026 01:12:08 +0000 (0:00:00.215) 0:00:23.640 ******** 2026-01-05 01:12:11.160644 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.160651 | orchestrator | 2026-01-05 01:12:11.160659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160667 | orchestrator | Monday 05 January 2026 01:12:09 +0000 (0:00:00.234) 0:00:23.874 ******** 2026-01-05 01:12:11.160675 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.160683 | orchestrator | 2026-01-05 01:12:11.160690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160704 | orchestrator | Monday 05 January 2026 01:12:09 +0000 (0:00:00.204) 0:00:24.079 ******** 2026-01-05 01:12:11.160713 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.160720 | orchestrator | 2026-01-05 01:12:11.160728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160736 | orchestrator | Monday 05 January 2026 01:12:10 +0000 (0:00:00.680) 0:00:24.759 ******** 2026-01-05 01:12:11.160744 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:11.160752 | orchestrator | 2026-01-05 01:12:11.160760 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160767 | orchestrator | Monday 05 January 2026 01:12:10 +0000 (0:00:00.222) 0:00:24.982 ******** 2026-01-05 01:12:11.160775 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-05 01:12:11.160784 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-05 01:12:11.160793 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-05 01:12:11.160800 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-05 01:12:11.160808 | orchestrator | 2026-01-05 
01:12:11.160842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:11.160851 | orchestrator | Monday 05 January 2026 01:12:10 +0000 (0:00:00.697) 0:00:25.680 ******** 2026-01-05 01:12:11.160859 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.450674 | orchestrator | 2026-01-05 01:12:17.450797 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:17.450877 | orchestrator | Monday 05 January 2026 01:12:11 +0000 (0:00:00.226) 0:00:25.906 ******** 2026-01-05 01:12:17.450890 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.450904 | orchestrator | 2026-01-05 01:12:17.450914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:17.450925 | orchestrator | Monday 05 January 2026 01:12:11 +0000 (0:00:00.217) 0:00:26.123 ******** 2026-01-05 01:12:17.450936 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.450947 | orchestrator | 2026-01-05 01:12:17.450957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:17.450967 | orchestrator | Monday 05 January 2026 01:12:11 +0000 (0:00:00.230) 0:00:26.354 ******** 2026-01-05 01:12:17.450978 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.450989 | orchestrator | 2026-01-05 01:12:17.451000 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-05 01:12:17.451011 | orchestrator | Monday 05 January 2026 01:12:11 +0000 (0:00:00.220) 0:00:26.575 ******** 2026-01-05 01:12:17.451052 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-05 01:12:17.451064 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-05 01:12:17.451073 | orchestrator | 2026-01-05 01:12:17.451082 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-01-05 01:12:17.451093 | orchestrator | Monday 05 January 2026 01:12:12 +0000 (0:00:00.183) 0:00:26.759 ******** 2026-01-05 01:12:17.451107 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.451117 | orchestrator | 2026-01-05 01:12:17.451127 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-05 01:12:17.451137 | orchestrator | Monday 05 January 2026 01:12:12 +0000 (0:00:00.148) 0:00:26.908 ******** 2026-01-05 01:12:17.451147 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.451157 | orchestrator | 2026-01-05 01:12:17.451167 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-05 01:12:17.451177 | orchestrator | Monday 05 January 2026 01:12:12 +0000 (0:00:00.135) 0:00:27.043 ******** 2026-01-05 01:12:17.451187 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.451198 | orchestrator | 2026-01-05 01:12:17.451209 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-05 01:12:17.451219 | orchestrator | Monday 05 January 2026 01:12:12 +0000 (0:00:00.138) 0:00:27.182 ******** 2026-01-05 01:12:17.451230 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:12:17.451242 | orchestrator | 2026-01-05 01:12:17.451252 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-05 01:12:17.451263 | orchestrator | Monday 05 January 2026 01:12:12 +0000 (0:00:00.147) 0:00:27.330 ******** 2026-01-05 01:12:17.451274 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cc420972-ce44-5a44-a5a6-a707e77471c5'}}) 2026-01-05 01:12:17.451286 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'}}) 2026-01-05 01:12:17.451297 | orchestrator | 2026-01-05 01:12:17.451307 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-01-05 01:12:17.451317 | orchestrator | Monday 05 January 2026 01:12:12 +0000 (0:00:00.418) 0:00:27.748 ******** 2026-01-05 01:12:17.451330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cc420972-ce44-5a44-a5a6-a707e77471c5'}})  2026-01-05 01:12:17.451345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'}})  2026-01-05 01:12:17.451355 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.451365 | orchestrator | 2026-01-05 01:12:17.451375 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-05 01:12:17.451385 | orchestrator | Monday 05 January 2026 01:12:13 +0000 (0:00:00.182) 0:00:27.930 ******** 2026-01-05 01:12:17.451396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cc420972-ce44-5a44-a5a6-a707e77471c5'}})  2026-01-05 01:12:17.451407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'}})  2026-01-05 01:12:17.451430 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.451441 | orchestrator | 2026-01-05 01:12:17.451451 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 01:12:17.451462 | orchestrator | Monday 05 January 2026 01:12:13 +0000 (0:00:00.171) 0:00:28.102 ******** 2026-01-05 01:12:17.451489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cc420972-ce44-5a44-a5a6-a707e77471c5'}})  2026-01-05 01:12:17.451501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'}})  2026-01-05 01:12:17.451512 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.451522 | 
orchestrator | 2026-01-05 01:12:17.451531 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 01:12:17.451553 | orchestrator | Monday 05 January 2026 01:12:13 +0000 (0:00:00.196) 0:00:28.298 ******** 2026-01-05 01:12:17.451564 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:12:17.451574 | orchestrator | 2026-01-05 01:12:17.451584 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-05 01:12:17.451594 | orchestrator | Monday 05 January 2026 01:12:13 +0000 (0:00:00.146) 0:00:28.445 ******** 2026-01-05 01:12:17.451605 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:12:17.451615 | orchestrator | 2026-01-05 01:12:17.451626 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-05 01:12:17.451637 | orchestrator | Monday 05 January 2026 01:12:13 +0000 (0:00:00.159) 0:00:28.605 ******** 2026-01-05 01:12:17.451683 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.451695 | orchestrator | 2026-01-05 01:12:17.451706 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-05 01:12:17.451717 | orchestrator | Monday 05 January 2026 01:12:14 +0000 (0:00:00.152) 0:00:28.757 ******** 2026-01-05 01:12:17.451728 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.451739 | orchestrator | 2026-01-05 01:12:17.451749 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-05 01:12:17.451760 | orchestrator | Monday 05 January 2026 01:12:14 +0000 (0:00:00.128) 0:00:28.886 ******** 2026-01-05 01:12:17.451771 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:12:17.451782 | orchestrator | 2026-01-05 01:12:17.451792 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-05 01:12:17.451803 | orchestrator | Monday 05 January 2026 01:12:14 +0000 
(0:00:00.138) 0:00:29.024 ********
2026-01-05 01:12:17.451838 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 01:12:17.451848 | orchestrator |     "ceph_osd_devices": {
2026-01-05 01:12:17.451859 | orchestrator |         "sdb": {
2026-01-05 01:12:17.451869 | orchestrator |             "osd_lvm_uuid": "cc420972-ce44-5a44-a5a6-a707e77471c5"
2026-01-05 01:12:17.451879 | orchestrator |         },
2026-01-05 01:12:17.451889 | orchestrator |         "sdc": {
2026-01-05 01:12:17.451899 | orchestrator |             "osd_lvm_uuid": "62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e"
2026-01-05 01:12:17.451910 | orchestrator |         }
2026-01-05 01:12:17.451921 | orchestrator |     }
2026-01-05 01:12:17.451932 | orchestrator | }
2026-01-05 01:12:17.451943 | orchestrator |
2026-01-05 01:12:17.451953 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-05 01:12:17.451964 | orchestrator | Monday 05 January 2026 01:12:14 +0000 (0:00:00.143) 0:00:29.168 ********
2026-01-05 01:12:17.451974 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:12:17.451984 | orchestrator |
2026-01-05 01:12:17.451995 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-05 01:12:17.452006 | orchestrator | Monday 05 January 2026 01:12:14 +0000 (0:00:00.158) 0:00:29.326 ********
2026-01-05 01:12:17.452015 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:12:17.452027 | orchestrator |
2026-01-05 01:12:17.452038 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-05 01:12:17.452049 | orchestrator | Monday 05 January 2026 01:12:14 +0000 (0:00:00.143) 0:00:29.470 ********
2026-01-05 01:12:17.452060 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:12:17.452072 | orchestrator |
2026-01-05 01:12:17.452083 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-05 01:12:17.452095 | orchestrator | Monday 05 January 2026 01:12:15 +0000 (0:00:00.362) 0:00:29.833 ********
2026-01-05 01:12:17.452106 | orchestrator | changed: [testbed-node-4] => {
2026-01-05 01:12:17.452118 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-05 01:12:17.452129 | orchestrator |         "ceph_osd_devices": {
2026-01-05 01:12:17.452140 | orchestrator |             "sdb": {
2026-01-05 01:12:17.452151 | orchestrator |                 "osd_lvm_uuid": "cc420972-ce44-5a44-a5a6-a707e77471c5"
2026-01-05 01:12:17.452162 | orchestrator |             },
2026-01-05 01:12:17.452173 | orchestrator |             "sdc": {
2026-01-05 01:12:17.452194 | orchestrator |                 "osd_lvm_uuid": "62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e"
2026-01-05 01:12:17.452206 | orchestrator |             }
2026-01-05 01:12:17.452216 | orchestrator |         },
2026-01-05 01:12:17.452227 | orchestrator |         "lvm_volumes": [
2026-01-05 01:12:17.452238 | orchestrator |             {
2026-01-05 01:12:17.452249 | orchestrator |                 "data": "osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5",
2026-01-05 01:12:17.452260 | orchestrator |                 "data_vg": "ceph-cc420972-ce44-5a44-a5a6-a707e77471c5"
2026-01-05 01:12:17.452271 | orchestrator |             },
2026-01-05 01:12:17.452282 | orchestrator |             {
2026-01-05 01:12:17.452293 | orchestrator |                 "data": "osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e",
2026-01-05 01:12:17.452303 | orchestrator |                 "data_vg": "ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e"
2026-01-05 01:12:17.452313 | orchestrator |             }
2026-01-05 01:12:17.452323 | orchestrator |         ]
2026-01-05 01:12:17.452333 | orchestrator |     }
2026-01-05 01:12:17.452343 | orchestrator | }
2026-01-05 01:12:17.452352 | orchestrator |
2026-01-05 01:12:17.452362 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-05 01:12:17.452371 | orchestrator | Monday 05 January 2026 01:12:15 +0000 (0:00:00.229) 0:00:30.062 ********
2026-01-05 01:12:17.452381 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-05 01:12:17.452391 | orchestrator |
2026-01-05 01:12:17.452400 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-01-05 01:12:17.452410 | orchestrator | 2026-01-05 01:12:17.452420 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-05 01:12:17.452429 | orchestrator | Monday 05 January 2026 01:12:16 +0000 (0:00:01.228) 0:00:31.291 ******** 2026-01-05 01:12:17.452440 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-05 01:12:17.452451 | orchestrator | 2026-01-05 01:12:17.452460 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-05 01:12:17.452479 | orchestrator | Monday 05 January 2026 01:12:16 +0000 (0:00:00.264) 0:00:31.555 ******** 2026-01-05 01:12:17.452490 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:12:17.452500 | orchestrator | 2026-01-05 01:12:17.452510 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:17.452520 | orchestrator | Monday 05 January 2026 01:12:17 +0000 (0:00:00.256) 0:00:31.811 ******** 2026-01-05 01:12:17.452529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-05 01:12:17.452540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-05 01:12:17.452549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-05 01:12:17.452559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-05 01:12:17.452569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-05 01:12:17.452594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-05 01:12:27.316866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-05 01:12:27.317881 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-05 01:12:27.317913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-05 01:12:27.317924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-05 01:12:27.317934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-05 01:12:27.317945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-05 01:12:27.317954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-05 01:12:27.317964 | orchestrator | 2026-01-05 01:12:27.317976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318012 | orchestrator | Monday 05 January 2026 01:12:17 +0000 (0:00:00.379) 0:00:32.191 ******** 2026-01-05 01:12:27.318061 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318072 | orchestrator | 2026-01-05 01:12:27.318082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318092 | orchestrator | Monday 05 January 2026 01:12:17 +0000 (0:00:00.215) 0:00:32.406 ******** 2026-01-05 01:12:27.318101 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318111 | orchestrator | 2026-01-05 01:12:27.318120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318130 | orchestrator | Monday 05 January 2026 01:12:18 +0000 (0:00:00.681) 0:00:33.088 ******** 2026-01-05 01:12:27.318139 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318149 | orchestrator | 2026-01-05 01:12:27.318158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318168 | 
orchestrator | Monday 05 January 2026 01:12:18 +0000 (0:00:00.202) 0:00:33.290 ******** 2026-01-05 01:12:27.318177 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318187 | orchestrator | 2026-01-05 01:12:27.318196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318205 | orchestrator | Monday 05 January 2026 01:12:18 +0000 (0:00:00.238) 0:00:33.529 ******** 2026-01-05 01:12:27.318214 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318224 | orchestrator | 2026-01-05 01:12:27.318233 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318242 | orchestrator | Monday 05 January 2026 01:12:18 +0000 (0:00:00.205) 0:00:33.735 ******** 2026-01-05 01:12:27.318252 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318261 | orchestrator | 2026-01-05 01:12:27.318270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318279 | orchestrator | Monday 05 January 2026 01:12:19 +0000 (0:00:00.226) 0:00:33.961 ******** 2026-01-05 01:12:27.318289 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318298 | orchestrator | 2026-01-05 01:12:27.318307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318316 | orchestrator | Monday 05 January 2026 01:12:19 +0000 (0:00:00.221) 0:00:34.182 ******** 2026-01-05 01:12:27.318326 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318336 | orchestrator | 2026-01-05 01:12:27.318346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318356 | orchestrator | Monday 05 January 2026 01:12:19 +0000 (0:00:00.247) 0:00:34.430 ******** 2026-01-05 01:12:27.318365 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b) 2026-01-05 01:12:27.318376 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b) 2026-01-05 01:12:27.318385 | orchestrator | 2026-01-05 01:12:27.318395 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318404 | orchestrator | Monday 05 January 2026 01:12:20 +0000 (0:00:00.466) 0:00:34.897 ******** 2026-01-05 01:12:27.318413 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b) 2026-01-05 01:12:27.318423 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b) 2026-01-05 01:12:27.318432 | orchestrator | 2026-01-05 01:12:27.318441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318451 | orchestrator | Monday 05 January 2026 01:12:20 +0000 (0:00:00.441) 0:00:35.338 ******** 2026-01-05 01:12:27.318460 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0) 2026-01-05 01:12:27.318482 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0) 2026-01-05 01:12:27.318491 | orchestrator | 2026-01-05 01:12:27.318501 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:12:27.318517 | orchestrator | Monday 05 January 2026 01:12:21 +0000 (0:00:00.460) 0:00:35.799 ******** 2026-01-05 01:12:27.318526 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2) 2026-01-05 01:12:27.318536 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2) 2026-01-05 01:12:27.318545 | orchestrator | 2026-01-05 01:12:27.318554 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-01-05 01:12:27.318563 | orchestrator | Monday 05 January 2026 01:12:21 +0000 (0:00:00.714) 0:00:36.513 ******** 2026-01-05 01:12:27.318572 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 01:12:27.318580 | orchestrator | 2026-01-05 01:12:27.318589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.318619 | orchestrator | Monday 05 January 2026 01:12:22 +0000 (0:00:00.627) 0:00:37.141 ******** 2026-01-05 01:12:27.318629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-05 01:12:27.318638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-05 01:12:27.318648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-05 01:12:27.318657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-05 01:12:27.318666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-05 01:12:27.318675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-05 01:12:27.318684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-05 01:12:27.318694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-05 01:12:27.318703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-05 01:12:27.318712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-05 01:12:27.318722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-01-05 01:12:27.318731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-05 01:12:27.318740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-05 01:12:27.318750 | orchestrator | 2026-01-05 01:12:27.318759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.318768 | orchestrator | Monday 05 January 2026 01:12:23 +0000 (0:00:00.907) 0:00:38.049 ******** 2026-01-05 01:12:27.318778 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318787 | orchestrator | 2026-01-05 01:12:27.318814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.318824 | orchestrator | Monday 05 January 2026 01:12:23 +0000 (0:00:00.211) 0:00:38.260 ******** 2026-01-05 01:12:27.318833 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318843 | orchestrator | 2026-01-05 01:12:27.318852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.318862 | orchestrator | Monday 05 January 2026 01:12:23 +0000 (0:00:00.218) 0:00:38.478 ******** 2026-01-05 01:12:27.318871 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318881 | orchestrator | 2026-01-05 01:12:27.318891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.318900 | orchestrator | Monday 05 January 2026 01:12:23 +0000 (0:00:00.225) 0:00:38.703 ******** 2026-01-05 01:12:27.318909 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318919 | orchestrator | 2026-01-05 01:12:27.318928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.318945 | orchestrator | Monday 05 January 2026 01:12:24 +0000 (0:00:00.218) 0:00:38.922 ******** 2026-01-05 01:12:27.318954 
| orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.318963 | orchestrator | 2026-01-05 01:12:27.318972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.318981 | orchestrator | Monday 05 January 2026 01:12:24 +0000 (0:00:00.235) 0:00:39.158 ******** 2026-01-05 01:12:27.318991 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.319000 | orchestrator | 2026-01-05 01:12:27.319052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.319062 | orchestrator | Monday 05 January 2026 01:12:24 +0000 (0:00:00.228) 0:00:39.386 ******** 2026-01-05 01:12:27.319071 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.319081 | orchestrator | 2026-01-05 01:12:27.319090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.319100 | orchestrator | Monday 05 January 2026 01:12:24 +0000 (0:00:00.208) 0:00:39.594 ******** 2026-01-05 01:12:27.319109 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.319118 | orchestrator | 2026-01-05 01:12:27.319127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.319137 | orchestrator | Monday 05 January 2026 01:12:25 +0000 (0:00:00.194) 0:00:39.789 ******** 2026-01-05 01:12:27.319147 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-05 01:12:27.319156 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-05 01:12:27.319166 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-05 01:12:27.319175 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-05 01:12:27.319185 | orchestrator | 2026-01-05 01:12:27.319200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.319210 | orchestrator | Monday 05 January 2026 01:12:25 +0000 (0:00:00.908) 
0:00:40.698 ******** 2026-01-05 01:12:27.319219 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.319229 | orchestrator | 2026-01-05 01:12:27.319238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.319248 | orchestrator | Monday 05 January 2026 01:12:26 +0000 (0:00:00.654) 0:00:41.352 ******** 2026-01-05 01:12:27.319257 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.319294 | orchestrator | 2026-01-05 01:12:27.319303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.319313 | orchestrator | Monday 05 January 2026 01:12:26 +0000 (0:00:00.224) 0:00:41.577 ******** 2026-01-05 01:12:27.319322 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.319332 | orchestrator | 2026-01-05 01:12:27.319341 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:12:27.319351 | orchestrator | Monday 05 January 2026 01:12:27 +0000 (0:00:00.220) 0:00:41.797 ******** 2026-01-05 01:12:27.319360 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:27.319370 | orchestrator | 2026-01-05 01:12:27.319385 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-05 01:12:31.921202 | orchestrator | Monday 05 January 2026 01:12:27 +0000 (0:00:00.265) 0:00:42.063 ******** 2026-01-05 01:12:31.921316 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-05 01:12:31.921331 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-05 01:12:31.921342 | orchestrator | 2026-01-05 01:12:31.921352 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-05 01:12:31.921363 | orchestrator | Monday 05 January 2026 01:12:27 +0000 (0:00:00.189) 0:00:42.253 ******** 2026-01-05 01:12:31.921373 | orchestrator | skipping: 
[testbed-node-5] 2026-01-05 01:12:31.921382 | orchestrator | 2026-01-05 01:12:31.921389 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-05 01:12:31.921395 | orchestrator | Monday 05 January 2026 01:12:27 +0000 (0:00:00.145) 0:00:42.399 ******** 2026-01-05 01:12:31.921401 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:31.921406 | orchestrator | 2026-01-05 01:12:31.921412 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-05 01:12:31.921440 | orchestrator | Monday 05 January 2026 01:12:27 +0000 (0:00:00.142) 0:00:42.541 ******** 2026-01-05 01:12:31.921446 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:31.921451 | orchestrator | 2026-01-05 01:12:31.921457 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-05 01:12:31.921463 | orchestrator | Monday 05 January 2026 01:12:27 +0000 (0:00:00.154) 0:00:42.696 ******** 2026-01-05 01:12:31.921468 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:12:31.921475 | orchestrator | 2026-01-05 01:12:31.921480 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-05 01:12:31.921486 | orchestrator | Monday 05 January 2026 01:12:28 +0000 (0:00:00.155) 0:00:42.851 ******** 2026-01-05 01:12:31.921492 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13a82a55-1430-5b0a-a1a4-baa9d6ca4414'}}) 2026-01-05 01:12:31.921498 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '124df3d1-788c-586c-b42c-9b6f84a90775'}}) 2026-01-05 01:12:31.921504 | orchestrator | 2026-01-05 01:12:31.921509 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-05 01:12:31.921515 | orchestrator | Monday 05 January 2026 01:12:28 +0000 (0:00:00.166) 0:00:43.017 ******** 2026-01-05 01:12:31.921521 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13a82a55-1430-5b0a-a1a4-baa9d6ca4414'}})  2026-01-05 01:12:31.921529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '124df3d1-788c-586c-b42c-9b6f84a90775'}})  2026-01-05 01:12:31.921535 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:31.921540 | orchestrator | 2026-01-05 01:12:31.921546 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-05 01:12:31.921551 | orchestrator | Monday 05 January 2026 01:12:28 +0000 (0:00:00.175) 0:00:43.193 ******** 2026-01-05 01:12:31.921557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13a82a55-1430-5b0a-a1a4-baa9d6ca4414'}})  2026-01-05 01:12:31.921562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '124df3d1-788c-586c-b42c-9b6f84a90775'}})  2026-01-05 01:12:31.921568 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:31.921573 | orchestrator | 2026-01-05 01:12:31.921578 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-05 01:12:31.921584 | orchestrator | Monday 05 January 2026 01:12:28 +0000 (0:00:00.161) 0:00:43.354 ******** 2026-01-05 01:12:31.921590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13a82a55-1430-5b0a-a1a4-baa9d6ca4414'}})  2026-01-05 01:12:31.921595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '124df3d1-788c-586c-b42c-9b6f84a90775'}})  2026-01-05 01:12:31.921601 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:12:31.921606 | orchestrator | 2026-01-05 01:12:31.921612 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-05 01:12:31.921617 | orchestrator | Monday 05 January 2026 01:12:28 +0000 
(0:00:00.161) 0:00:43.515 ********
2026-01-05 01:12:31.921622 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:12:31.921628 | orchestrator |
2026-01-05 01:12:31.921633 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-05 01:12:31.921639 | orchestrator | Monday 05 January 2026 01:12:29 +0000 (0:00:00.393) 0:00:43.909 ********
2026-01-05 01:12:31.921644 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:12:31.921650 | orchestrator |
2026-01-05 01:12:31.921669 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-05 01:12:31.921674 | orchestrator | Monday 05 January 2026 01:12:29 +0000 (0:00:00.149) 0:00:44.058 ********
2026-01-05 01:12:31.921680 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:12:31.921685 | orchestrator |
2026-01-05 01:12:31.921691 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-05 01:12:31.921702 | orchestrator | Monday 05 January 2026 01:12:29 +0000 (0:00:00.148) 0:00:44.206 ********
2026-01-05 01:12:31.921707 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:12:31.921713 | orchestrator |
2026-01-05 01:12:31.921718 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-05 01:12:31.921724 | orchestrator | Monday 05 January 2026 01:12:29 +0000 (0:00:00.150) 0:00:44.357 ********
2026-01-05 01:12:31.921729 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:12:31.921736 | orchestrator |
2026-01-05 01:12:31.921742 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-05 01:12:31.921748 | orchestrator | Monday 05 January 2026 01:12:29 +0000 (0:00:00.146) 0:00:44.503 ********
2026-01-05 01:12:31.921757 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 01:12:31.921765 | orchestrator |     "ceph_osd_devices": {
2026-01-05 01:12:31.921778 | orchestrator |         "sdb": {
2026-01-05 01:12:31.921830 | orchestrator |             "osd_lvm_uuid": "13a82a55-1430-5b0a-a1a4-baa9d6ca4414"
2026-01-05 01:12:31.921841 | orchestrator |         },
2026-01-05 01:12:31.921849 | orchestrator |         "sdc": {
2026-01-05 01:12:31.921858 | orchestrator |             "osd_lvm_uuid": "124df3d1-788c-586c-b42c-9b6f84a90775"
2026-01-05 01:12:31.921867 | orchestrator |         }
2026-01-05 01:12:31.921876 | orchestrator |     }
2026-01-05 01:12:31.921884 | orchestrator | }
2026-01-05 01:12:31.921892 | orchestrator |
2026-01-05 01:12:31.921900 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-05 01:12:31.921909 | orchestrator | Monday 05 January 2026 01:12:29 +0000 (0:00:00.150) 0:00:44.654 ********
2026-01-05 01:12:31.921917 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:12:31.921926 | orchestrator |
2026-01-05 01:12:31.921934 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-05 01:12:31.921942 | orchestrator | Monday 05 January 2026 01:12:30 +0000 (0:00:00.151) 0:00:44.806 ********
2026-01-05 01:12:31.921951 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:12:31.921960 | orchestrator |
2026-01-05 01:12:31.921969 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-05 01:12:31.921978 | orchestrator | Monday 05 January 2026 01:12:30 +0000 (0:00:00.148) 0:00:44.954 ********
2026-01-05 01:12:31.921987 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:12:31.921995 | orchestrator |
2026-01-05 01:12:31.922004 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-05 01:12:31.922173 | orchestrator | Monday 05 January 2026 01:12:30 +0000 (0:00:00.148) 0:00:45.103 ********
2026-01-05 01:12:31.922188 | orchestrator | changed: [testbed-node-5] => {
2026-01-05 01:12:31.922194 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-05 01:12:31.922199 | orchestrator |
 "ceph_osd_devices": { 2026-01-05 01:12:31.922205 | orchestrator |  "sdb": { 2026-01-05 01:12:31.922211 | orchestrator |  "osd_lvm_uuid": "13a82a55-1430-5b0a-a1a4-baa9d6ca4414" 2026-01-05 01:12:31.922217 | orchestrator |  }, 2026-01-05 01:12:31.922226 | orchestrator |  "sdc": { 2026-01-05 01:12:31.922235 | orchestrator |  "osd_lvm_uuid": "124df3d1-788c-586c-b42c-9b6f84a90775" 2026-01-05 01:12:31.922243 | orchestrator |  } 2026-01-05 01:12:31.922256 | orchestrator |  }, 2026-01-05 01:12:31.922266 | orchestrator |  "lvm_volumes": [ 2026-01-05 01:12:31.922274 | orchestrator |  { 2026-01-05 01:12:31.922282 | orchestrator |  "data": "osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414", 2026-01-05 01:12:31.922291 | orchestrator |  "data_vg": "ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414" 2026-01-05 01:12:31.922300 | orchestrator |  }, 2026-01-05 01:12:31.922309 | orchestrator |  { 2026-01-05 01:12:31.922318 | orchestrator |  "data": "osd-block-124df3d1-788c-586c-b42c-9b6f84a90775", 2026-01-05 01:12:31.922328 | orchestrator |  "data_vg": "ceph-124df3d1-788c-586c-b42c-9b6f84a90775" 2026-01-05 01:12:31.922338 | orchestrator |  } 2026-01-05 01:12:31.922346 | orchestrator |  ] 2026-01-05 01:12:31.922369 | orchestrator |  } 2026-01-05 01:12:31.922375 | orchestrator | } 2026-01-05 01:12:31.922380 | orchestrator | 2026-01-05 01:12:31.922386 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-05 01:12:31.922391 | orchestrator | Monday 05 January 2026 01:12:30 +0000 (0:00:00.233) 0:00:45.336 ******** 2026-01-05 01:12:31.922397 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-05 01:12:31.922402 | orchestrator | 2026-01-05 01:12:31.922408 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:12:31.922414 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-05 01:12:31.922422 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-05 01:12:31.922427 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-05 01:12:31.922433 | orchestrator | 2026-01-05 01:12:31.922438 | orchestrator | 2026-01-05 01:12:31.922444 | orchestrator | 2026-01-05 01:12:31.922449 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:12:31.922454 | orchestrator | Monday 05 January 2026 01:12:31 +0000 (0:00:01.309) 0:00:46.646 ******** 2026-01-05 01:12:31.922460 | orchestrator | =============================================================================== 2026-01-05 01:12:31.922465 | orchestrator | Write configuration file ------------------------------------------------ 4.40s 2026-01-05 01:12:31.922470 | orchestrator | Add known partitions to the list of available block devices ------------- 1.80s 2026-01-05 01:12:31.922482 | orchestrator | Add known links to the list of available block devices ------------------ 1.78s 2026-01-05 01:12:31.922488 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2026-01-05 01:12:31.922493 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s 2026-01-05 01:12:31.922498 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s 2026-01-05 01:12:31.922504 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.82s 2026-01-05 01:12:31.922509 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.78s 2026-01-05 01:12:31.922514 | orchestrator | Get initial list of available block devices ----------------------------- 0.76s 2026-01-05 01:12:31.922520 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-01-05 
01:12:31.922525 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-01-05 01:12:31.922530 | orchestrator | Print configuration data ------------------------------------------------ 0.70s 2026-01-05 01:12:31.922536 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-01-05 01:12:31.922553 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-01-05 01:12:32.418721 | orchestrator | Compile lvm_volumes ----------------------------------------------------- 0.69s 2026-01-05 01:12:32.418870 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-01-05 01:12:32.418884 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-01-05 01:12:32.418893 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-01-05 01:12:32.418901 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2026-01-05 01:12:32.418910 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.67s 2026-01-05 01:12:55.184270 | orchestrator | 2026-01-05 01:12:55 | INFO  | Task e34722c3-9cd7-4a1b-8c9b-0290e61ee586 (sync inventory) is running in background. Output coming soon. 
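As an aside on the configuration printed above: the `lvm_volumes` list appears to be derived mechanically from `ceph_osd_devices`. A minimal sketch of that derivation, with the naming rule (`osd-block-<uuid>` / `ceph-<uuid>`) inferred from the "Print configuration data" output rather than taken from the playbook source:

```python
# Sketch: rebuild the lvm_volumes structure from ceph_osd_devices as printed
# by the "Print configuration data" task for testbed-node-5. The naming rule
# is inferred from the log output, not from the playbook itself.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "13a82a55-1430-5b0a-a1a4-baa9d6ca4414"},
    "sdc": {"osd_lvm_uuid": "124df3d1-788c-586c-b42c-9b6f84a90775"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{cfg['osd_lvm_uuid']}",
        "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
    }
    for cfg in ceph_osd_devices.values()
]
```

This reproduces the two entries shown in `_ceph_configure_lvm_config_data` above; the handler then writes that structure to the configuration repository on testbed-manager.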
2026-01-05 01:12:56 | INFO  | Starting group_vars file reorganization
2026-01-05 01:12:56 | INFO  | Moved 0 file(s) to their respective directories
2026-01-05 01:12:56 | INFO  | Group_vars file reorganization completed
2026-01-05 01:12:59 | INFO  | Starting variable preparation from inventory
2026-01-05 01:13:02 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-05 01:13:02 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-05 01:13:02 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-05 01:13:02 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-05 01:13:02 | INFO  | Variable preparation completed
2026-01-05 01:13:04 | INFO  | Starting inventory overwrite handling
2026-01-05 01:13:04 | INFO  | Handling group overwrites in 99-overwrite
2026-01-05 01:13:04 | INFO  | Removing group frr:children from 60-generic
2026-01-05 01:13:04 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-05 01:13:04 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-05 01:13:04 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-05 01:13:04 | INFO  | Handling group overwrites in 20-roles
2026-01-05 01:13:04 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-05 01:13:04 | INFO  | Removed 5 group(s) in total
2026-01-05 01:13:04 | INFO  | Inventory overwrite handling completed
2026-01-05 01:13:05 | INFO  | Starting merge of inventory files
2026-01-05 01:13:05 | INFO  | Inventory files merged successfully
2026-01-05 01:13:11 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-05 01:13:23 | INFO  | Successfully wrote ClusterShell configuration
[master 842c0cc] 2026-01-05-01-13
 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-05 01:13:27 | INFO  | Task 69a23883-966e-4cd8-ab00-46854214b378 (ceph-create-lvm-devices) was prepared for execution.
2026-01-05 01:13:27 | INFO  | It takes a moment until task 69a23883-966e-4cd8-ab00-46854214b378 (ceph-create-lvm-devices) has been started and output is visible here.
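The ceph-create-lvm-devices play that follows creates one volume group and one logical volume per OSD device. A hedged sketch of the shell commands this amounts to, assuming the PV is the raw device path and the LV spans the whole VG (the play itself uses Ansible modules, and the device paths and lvcreate flags here are illustrative assumptions, not taken from the playbook):

```python
# Sketch (assumption): the vgcreate/lvcreate equivalents of the
# "Create block VGs" / "Create block LVs" tasks for testbed-node-3.
# Device-to-UUID pairs are taken from the task output below; the
# "-l 100%FREE" flag and /dev/<device> PV paths are assumptions.
osd_devices = [
    ("sdb", "9b63b326-8bb9-546b-aabb-a628fef076ec"),
    ("sdc", "b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c"),
]

cmds = []
for dev, uuid in osd_devices:
    cmds.append(f"vgcreate ceph-{uuid} /dev/{dev}")                    # block VG on the raw device
    cmds.append(f"lvcreate -l 100%FREE -n osd-block-{uuid} ceph-{uuid}")  # block LV filling the VG

print("\n".join(cmds))
```

The resulting VG/LV names (`ceph-<uuid>` / `osd-block-<uuid>`) match the `data_vg` and `data` fields that ceph-ansible's `lvm_volumes` entries reference.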
[WARNING]: Collection community.general does not support Ansible version
2.16.14

PLAY [Ceph create LVM devices] *************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Monday 05 January 2026 01:13:31 +0000 (0:00:00.319) 0:00:00.319 ********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Monday 05 January 2026 01:13:31 +0000 (0:00:00.265) 0:00:00.585 ********
ok: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:32 +0000 (0:00:00.310) 0:00:00.895 ********
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:32 +0000 (0:00:00.569) 0:00:01.465 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:32 +0000 (0:00:00.216) 0:00:01.681 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:33 +0000 (0:00:00.230) 0:00:01.912 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:33 +0000 (0:00:00.226) 0:00:02.139 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:33 +0000 (0:00:00.252) 0:00:02.391 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:33 +0000 (0:00:00.226) 0:00:02.618 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:34 +0000 (0:00:00.229) 0:00:02.848 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:34 +0000 (0:00:00.220) 0:00:03.068 ********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:34 +0000 (0:00:00.207) 0:00:03.275 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b)

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:35 +0000 (0:00:00.705) 0:00:03.981 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d)

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:35 +0000 (0:00:00.706) 0:00:04.688 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b)

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:36 +0000 (0:00:00.973) 0:00:05.661 ********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41)

TASK [Add known links to the list of available block devices] ******************
Monday 05 January 2026 01:13:37 +0000 (0:00:00.480) 0:00:06.142 ********
ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:37 +0000 (0:00:00.385) 0:00:06.528 ********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:38 +0000 (0:00:00.466) 0:00:06.994 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:38 +0000 (0:00:00.231) 0:00:07.225 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:38 +0000 (0:00:00.231) 0:00:07.457 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:38 +0000 (0:00:00.219) 0:00:07.676 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:39 +0000 (0:00:00.240) 0:00:07.917 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:39 +0000 (0:00:00.229) 0:00:08.147 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:40 +0000 (0:00:00.651) 0:00:08.799 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:40 +0000 (0:00:00.241) 0:00:09.040 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:40 +0000 (0:00:00.211) 0:00:09.252 ********
ok: [testbed-node-3] => (item=sda1)
ok: [testbed-node-3] => (item=sda14)
ok: [testbed-node-3] => (item=sda15)
ok: [testbed-node-3] => (item=sda16)
TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:41 +0000 (0:00:00.727) 0:00:09.980 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:41 +0000 (0:00:00.242) 0:00:10.223 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:41 +0000 (0:00:00.227) 0:00:10.450 ********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 05 January 2026 01:13:41 +0000 (0:00:00.227) 0:00:10.677 ********
skipping: [testbed-node-3]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Monday 05 January 2026 01:13:42 +0000 (0:00:00.216) 0:00:10.894 ********
skipping: [testbed-node-3]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Monday 05 January 2026 01:13:42 +0000 (0:00:00.143) 0:00:11.037 ********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9b63b326-8bb9-546b-aabb-a628fef076ec'}})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'}})

TASK [Create block VGs] ********************************************************
Monday 05 January 2026 01:13:42 +0000 (0:00:00.201) 0:00:11.239 ********
changed: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
changed: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})

TASK [Print 'Create block VGs'] ************************************************
Monday 05 January 2026 01:13:44 +0000 (0:00:02.104) 0:00:13.343 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})
skipping: [testbed-node-3]

TASK [Create block LVs] ********************************************************
Monday 05 January 2026 01:13:45 +0000 (0:00:00.385) 0:00:13.729 ********
changed: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
changed: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})

TASK [Print 'Create block LVs'] ************************************************
Monday 05 January 2026 01:13:46 +0000 (0:00:01.649) 0:00:15.379 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})
skipping: [testbed-node-3]

TASK [Create DB VGs] ***********************************************************
Monday 05 January 2026 01:13:46 +0000 (0:00:00.165) 0:00:15.544 ********
skipping: [testbed-node-3]

TASK [Print 'Create DB VGs'] ***************************************************
Monday 05 January 2026 01:13:47 +0000 (0:00:00.195) 0:00:15.740 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})
skipping: [testbed-node-3]

TASK [Create WAL VGs] **********************************************************
Monday 05 January 2026 01:13:47 +0000 (0:00:00.175) 0:00:15.915 ********
skipping: [testbed-node-3]

TASK [Print 'Create WAL VGs'] **************************************************
Monday 05 January 2026 01:13:47 +0000 (0:00:00.153) 0:00:16.069 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})
skipping: [testbed-node-3]

TASK [Create DB+WAL VGs] *******************************************************
Monday 05 January 2026 01:13:47 +0000 (0:00:00.168) 0:00:16.238 ********
skipping: [testbed-node-3]

TASK [Print 'Create DB+WAL VGs'] ***********************************************
Monday 05 January 2026 01:13:47 +0000 (0:00:00.154) 0:00:16.393 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})
skipping: [testbed-node-3]

TASK [Prepare variables for OSD count check] ***********************************
Monday 05 January 2026 01:13:47 +0000 (0:00:00.180) 0:00:16.574 ********
ok: [testbed-node-3]

TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
Monday 05 January 2026 01:13:48 +0000 (0:00:00.165) 0:00:16.739 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})
skipping: [testbed-node-3]

TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
Monday 05 January 2026 01:13:48 +0000 (0:00:00.170) 0:00:16.910 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})
skipping: [testbed-node-3]

TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
Monday 05 January 2026 01:13:48 +0000 (0:00:00.393) 0:00:17.304 ********
skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
Monday 05 January 2026 01:13:48 +0000 (0:00:00.161) 0:00:17.465 ********
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
Monday 05 January 2026 01:13:48 +0000 (0:00:00.158) 0:00:17.624 ********
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a
DB+WAL VG] ***************** 2026-01-05 01:13:56.066616 | orchestrator | Monday 05 January 2026 01:13:49 +0000 (0:00:00.166) 0:00:17.790 ******** 2026-01-05 01:13:56.066623 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.066629 | orchestrator | 2026-01-05 01:13:56.066636 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-05 01:13:56.066649 | orchestrator | Monday 05 January 2026 01:13:49 +0000 (0:00:00.168) 0:00:17.959 ******** 2026-01-05 01:13:56.066655 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 01:13:56.066663 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-05 01:13:56.066669 | orchestrator | } 2026-01-05 01:13:56.066675 | orchestrator | 2026-01-05 01:13:56.066681 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-05 01:13:56.066711 | orchestrator | Monday 05 January 2026 01:13:49 +0000 (0:00:00.175) 0:00:18.134 ******** 2026-01-05 01:13:56.066718 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 01:13:56.066725 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-05 01:13:56.066732 | orchestrator | } 2026-01-05 01:13:56.066739 | orchestrator | 2026-01-05 01:13:56.066746 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-05 01:13:56.066753 | orchestrator | Monday 05 January 2026 01:13:49 +0000 (0:00:00.156) 0:00:18.291 ******** 2026-01-05 01:13:56.066761 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 01:13:56.066768 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-05 01:13:56.066775 | orchestrator | } 2026-01-05 01:13:56.066806 | orchestrator | 2026-01-05 01:13:56.066813 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-05 01:13:56.066820 | orchestrator | Monday 05 January 2026 01:13:49 +0000 (0:00:00.155) 0:00:18.447 ******** 2026-01-05 01:13:56.066826 | orchestrator | ok: 
[testbed-node-3] 2026-01-05 01:13:56.066833 | orchestrator | 2026-01-05 01:13:56.066839 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-05 01:13:56.066846 | orchestrator | Monday 05 January 2026 01:13:50 +0000 (0:00:00.758) 0:00:19.205 ******** 2026-01-05 01:13:56.066853 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:13:56.066858 | orchestrator | 2026-01-05 01:13:56.066865 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-05 01:13:56.066871 | orchestrator | Monday 05 January 2026 01:13:51 +0000 (0:00:00.540) 0:00:19.746 ******** 2026-01-05 01:13:56.066878 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:13:56.066884 | orchestrator | 2026-01-05 01:13:56.066890 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-05 01:13:56.066897 | orchestrator | Monday 05 January 2026 01:13:51 +0000 (0:00:00.546) 0:00:20.293 ******** 2026-01-05 01:13:56.066903 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:13:56.066909 | orchestrator | 2026-01-05 01:13:56.066916 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-05 01:13:56.066922 | orchestrator | Monday 05 January 2026 01:13:51 +0000 (0:00:00.367) 0:00:20.661 ******** 2026-01-05 01:13:56.066929 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.066935 | orchestrator | 2026-01-05 01:13:56.066941 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-05 01:13:56.066947 | orchestrator | Monday 05 January 2026 01:13:52 +0000 (0:00:00.127) 0:00:20.788 ******** 2026-01-05 01:13:56.066953 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.066959 | orchestrator | 2026-01-05 01:13:56.066965 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-05 01:13:56.066972 | orchestrator | 
Monday 05 January 2026 01:13:52 +0000 (0:00:00.143) 0:00:20.932 ******** 2026-01-05 01:13:56.066978 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 01:13:56.066984 | orchestrator |  "vgs_report": { 2026-01-05 01:13:56.066990 | orchestrator |  "vg": [] 2026-01-05 01:13:56.066997 | orchestrator |  } 2026-01-05 01:13:56.067003 | orchestrator | } 2026-01-05 01:13:56.067009 | orchestrator | 2026-01-05 01:13:56.067016 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-05 01:13:56.067022 | orchestrator | Monday 05 January 2026 01:13:52 +0000 (0:00:00.193) 0:00:21.126 ******** 2026-01-05 01:13:56.067028 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067034 | orchestrator | 2026-01-05 01:13:56.067040 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-05 01:13:56.067046 | orchestrator | Monday 05 January 2026 01:13:52 +0000 (0:00:00.144) 0:00:21.270 ******** 2026-01-05 01:13:56.067052 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067058 | orchestrator | 2026-01-05 01:13:56.067064 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-05 01:13:56.067071 | orchestrator | Monday 05 January 2026 01:13:52 +0000 (0:00:00.146) 0:00:21.417 ******** 2026-01-05 01:13:56.067077 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067083 | orchestrator | 2026-01-05 01:13:56.067089 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-05 01:13:56.067095 | orchestrator | Monday 05 January 2026 01:13:52 +0000 (0:00:00.153) 0:00:21.570 ******** 2026-01-05 01:13:56.067101 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067108 | orchestrator | 2026-01-05 01:13:56.067114 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-05 01:13:56.067120 | orchestrator | Monday 
05 January 2026 01:13:53 +0000 (0:00:00.145) 0:00:21.716 ******** 2026-01-05 01:13:56.067126 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067132 | orchestrator | 2026-01-05 01:13:56.067138 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-05 01:13:56.067150 | orchestrator | Monday 05 January 2026 01:13:53 +0000 (0:00:00.176) 0:00:21.892 ******** 2026-01-05 01:13:56.067156 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067162 | orchestrator | 2026-01-05 01:13:56.067181 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-05 01:13:56.067187 | orchestrator | Monday 05 January 2026 01:13:53 +0000 (0:00:00.162) 0:00:22.055 ******** 2026-01-05 01:13:56.067193 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067199 | orchestrator | 2026-01-05 01:13:56.067205 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-05 01:13:56.067211 | orchestrator | Monday 05 January 2026 01:13:53 +0000 (0:00:00.148) 0:00:22.203 ******** 2026-01-05 01:13:56.067234 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067241 | orchestrator | 2026-01-05 01:13:56.067247 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-05 01:13:56.067253 | orchestrator | Monday 05 January 2026 01:13:53 +0000 (0:00:00.404) 0:00:22.608 ******** 2026-01-05 01:13:56.067259 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067265 | orchestrator | 2026-01-05 01:13:56.067271 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-05 01:13:56.067278 | orchestrator | Monday 05 January 2026 01:13:54 +0000 (0:00:00.152) 0:00:22.761 ******** 2026-01-05 01:13:56.067284 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067290 | orchestrator | 2026-01-05 01:13:56.067297 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-05 01:13:56.067303 | orchestrator | Monday 05 January 2026 01:13:54 +0000 (0:00:00.160) 0:00:22.921 ******** 2026-01-05 01:13:56.067309 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067316 | orchestrator | 2026-01-05 01:13:56.067323 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-05 01:13:56.067329 | orchestrator | Monday 05 January 2026 01:13:54 +0000 (0:00:00.155) 0:00:23.077 ******** 2026-01-05 01:13:56.067335 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067341 | orchestrator | 2026-01-05 01:13:56.067347 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-05 01:13:56.067354 | orchestrator | Monday 05 January 2026 01:13:54 +0000 (0:00:00.170) 0:00:23.247 ******** 2026-01-05 01:13:56.067359 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067365 | orchestrator | 2026-01-05 01:13:56.067371 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-05 01:13:56.067377 | orchestrator | Monday 05 January 2026 01:13:54 +0000 (0:00:00.148) 0:00:23.396 ******** 2026-01-05 01:13:56.067383 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067388 | orchestrator | 2026-01-05 01:13:56.067394 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-05 01:13:56.067400 | orchestrator | Monday 05 January 2026 01:13:54 +0000 (0:00:00.140) 0:00:23.536 ******** 2026-01-05 01:13:56.067408 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:13:56.067416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 
'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:13:56.067422 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067428 | orchestrator | 2026-01-05 01:13:56.067434 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-05 01:13:56.067440 | orchestrator | Monday 05 January 2026 01:13:55 +0000 (0:00:00.168) 0:00:23.705 ******** 2026-01-05 01:13:56.067446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:13:56.067452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:13:56.067463 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067468 | orchestrator | 2026-01-05 01:13:56.067474 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-05 01:13:56.067480 | orchestrator | Monday 05 January 2026 01:13:55 +0000 (0:00:00.171) 0:00:23.876 ******** 2026-01-05 01:13:56.067486 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:13:56.067491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:13:56.067498 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067503 | orchestrator | 2026-01-05 01:13:56.067509 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-05 01:13:56.067515 | orchestrator | Monday 05 January 2026 01:13:55 +0000 (0:00:00.164) 0:00:24.041 ******** 2026-01-05 01:13:56.067521 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:13:56.067527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:13:56.067533 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067538 | orchestrator | 2026-01-05 01:13:56.067545 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-05 01:13:56.067550 | orchestrator | Monday 05 January 2026 01:13:55 +0000 (0:00:00.161) 0:00:24.202 ******** 2026-01-05 01:13:56.067556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:13:56.067567 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:13:56.067573 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:13:56.067579 | orchestrator | 2026-01-05 01:13:56.067585 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-05 01:13:56.067590 | orchestrator | Monday 05 January 2026 01:13:55 +0000 (0:00:00.387) 0:00:24.590 ******** 2026-01-05 01:13:56.067601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:14:01.923501 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:14:01.923591 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:14:01.923599 | orchestrator | 2026-01-05 01:14:01.923604 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-05 01:14:01.923610 | orchestrator | Monday 05 January 2026 01:13:56 +0000 (0:00:00.172) 0:00:24.763 ******** 2026-01-05 01:14:01.923614 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:14:01.923620 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:14:01.923624 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:14:01.923627 | orchestrator | 2026-01-05 01:14:01.923632 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-05 01:14:01.923636 | orchestrator | Monday 05 January 2026 01:13:56 +0000 (0:00:00.191) 0:00:24.954 ******** 2026-01-05 01:14:01.923640 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:14:01.923644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:14:01.923672 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:14:01.923693 | orchestrator | 2026-01-05 01:14:01.923699 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-05 01:14:01.923705 | orchestrator | Monday 05 January 2026 01:13:56 +0000 (0:00:00.167) 0:00:25.122 ******** 2026-01-05 01:14:01.923710 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:14:01.923718 | orchestrator | 2026-01-05 01:14:01.923724 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-05 01:14:01.923729 | orchestrator | Monday 05 January 2026 01:13:57 +0000 
(0:00:00.593) 0:00:25.715 ******** 2026-01-05 01:14:01.923735 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:14:01.923740 | orchestrator | 2026-01-05 01:14:01.923746 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-05 01:14:01.923752 | orchestrator | Monday 05 January 2026 01:13:57 +0000 (0:00:00.541) 0:00:26.257 ******** 2026-01-05 01:14:01.923757 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:14:01.923764 | orchestrator | 2026-01-05 01:14:01.923770 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-05 01:14:01.923776 | orchestrator | Monday 05 January 2026 01:13:57 +0000 (0:00:00.154) 0:00:26.411 ******** 2026-01-05 01:14:01.923782 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'vg_name': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'}) 2026-01-05 01:14:01.923790 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'vg_name': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'}) 2026-01-05 01:14:01.923796 | orchestrator | 2026-01-05 01:14:01.923803 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-05 01:14:01.923809 | orchestrator | Monday 05 January 2026 01:13:57 +0000 (0:00:00.284) 0:00:26.696 ******** 2026-01-05 01:14:01.923815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:14:01.923821 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:14:01.923828 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:14:01.923832 | orchestrator | 2026-01-05 01:14:01.923836 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-05 01:14:01.923840 | orchestrator | Monday 05 January 2026 01:13:58 +0000 (0:00:00.171) 0:00:26.867 ******** 2026-01-05 01:14:01.923844 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:14:01.923848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:14:01.923852 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:14:01.923855 | orchestrator | 2026-01-05 01:14:01.923859 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-05 01:14:01.923863 | orchestrator | Monday 05 January 2026 01:13:58 +0000 (0:00:00.224) 0:00:27.091 ******** 2026-01-05 01:14:01.923879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'})  2026-01-05 01:14:01.923883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'})  2026-01-05 01:14:01.923886 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:14:01.923890 | orchestrator | 2026-01-05 01:14:01.923894 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-05 01:14:01.923898 | orchestrator | Monday 05 January 2026 01:13:58 +0000 (0:00:00.169) 0:00:27.261 ******** 2026-01-05 01:14:01.923923 | orchestrator | ok: [testbed-node-3] => { 2026-01-05 01:14:01.923929 | orchestrator |  "lvm_report": { 2026-01-05 01:14:01.923935 | orchestrator |  "lv": [ 2026-01-05 01:14:01.923941 | orchestrator |  { 2026-01-05 01:14:01.923948 | orchestrator |  "lv_name": 
"osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec", 2026-01-05 01:14:01.923955 | orchestrator |  "vg_name": "ceph-9b63b326-8bb9-546b-aabb-a628fef076ec" 2026-01-05 01:14:01.923962 | orchestrator |  }, 2026-01-05 01:14:01.923968 | orchestrator |  { 2026-01-05 01:14:01.923974 | orchestrator |  "lv_name": "osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c", 2026-01-05 01:14:01.923981 | orchestrator |  "vg_name": "ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c" 2026-01-05 01:14:01.923987 | orchestrator |  } 2026-01-05 01:14:01.923993 | orchestrator |  ], 2026-01-05 01:14:01.923999 | orchestrator |  "pv": [ 2026-01-05 01:14:01.924006 | orchestrator |  { 2026-01-05 01:14:01.924010 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-05 01:14:01.924014 | orchestrator |  "vg_name": "ceph-9b63b326-8bb9-546b-aabb-a628fef076ec" 2026-01-05 01:14:01.924017 | orchestrator |  }, 2026-01-05 01:14:01.924021 | orchestrator |  { 2026-01-05 01:14:01.924025 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-05 01:14:01.924029 | orchestrator |  "vg_name": "ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c" 2026-01-05 01:14:01.924033 | orchestrator |  } 2026-01-05 01:14:01.924037 | orchestrator |  ] 2026-01-05 01:14:01.924041 | orchestrator |  } 2026-01-05 01:14:01.924046 | orchestrator | } 2026-01-05 01:14:01.924051 | orchestrator | 2026-01-05 01:14:01.924056 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-05 01:14:01.924061 | orchestrator | 2026-01-05 01:14:01.924065 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-05 01:14:01.924070 | orchestrator | Monday 05 January 2026 01:13:59 +0000 (0:00:00.525) 0:00:27.786 ******** 2026-01-05 01:14:01.924074 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-05 01:14:01.924079 | orchestrator | 2026-01-05 01:14:01.924084 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-05 
01:14:01.924088 | orchestrator | Monday 05 January 2026 01:13:59 +0000 (0:00:00.283) 0:00:28.070 ******** 2026-01-05 01:14:01.924093 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:14:01.924097 | orchestrator | 2026-01-05 01:14:01.924102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:01.924106 | orchestrator | Monday 05 January 2026 01:13:59 +0000 (0:00:00.250) 0:00:28.321 ******** 2026-01-05 01:14:01.924111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-05 01:14:01.924116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-05 01:14:01.924120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-05 01:14:01.924125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-05 01:14:01.924129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-05 01:14:01.924134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-05 01:14:01.924138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-05 01:14:01.924143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-05 01:14:01.924147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-05 01:14:01.924152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-05 01:14:01.924156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-05 01:14:01.924165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-05 01:14:01.924169 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-05 01:14:01.924174 | orchestrator | 2026-01-05 01:14:01.924178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:01.924183 | orchestrator | Monday 05 January 2026 01:14:00 +0000 (0:00:00.436) 0:00:28.757 ******** 2026-01-05 01:14:01.924188 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:14:01.924196 | orchestrator | 2026-01-05 01:14:01.924205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:01.924211 | orchestrator | Monday 05 January 2026 01:14:00 +0000 (0:00:00.230) 0:00:28.987 ******** 2026-01-05 01:14:01.924217 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:14:01.924223 | orchestrator | 2026-01-05 01:14:01.924229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:01.924235 | orchestrator | Monday 05 January 2026 01:14:00 +0000 (0:00:00.245) 0:00:29.233 ******** 2026-01-05 01:14:01.924241 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:14:01.924246 | orchestrator | 2026-01-05 01:14:01.924252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:01.924258 | orchestrator | Monday 05 January 2026 01:14:00 +0000 (0:00:00.251) 0:00:29.484 ******** 2026-01-05 01:14:01.924265 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:14:01.924271 | orchestrator | 2026-01-05 01:14:01.924282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:01.924288 | orchestrator | Monday 05 January 2026 01:14:01 +0000 (0:00:00.225) 0:00:29.710 ******** 2026-01-05 01:14:01.924295 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:14:01.924301 | orchestrator | 2026-01-05 01:14:01.924308 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-05 01:14:01.924317 | orchestrator | Monday 05 January 2026 01:14:01 +0000 (0:00:00.252) 0:00:29.963 ******** 2026-01-05 01:14:01.924324 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:14:01.924330 | orchestrator | 2026-01-05 01:14:01.924343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:13.471001 | orchestrator | Monday 05 January 2026 01:14:01 +0000 (0:00:00.658) 0:00:30.622 ******** 2026-01-05 01:14:13.471087 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:14:13.471095 | orchestrator | 2026-01-05 01:14:13.471101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:13.471106 | orchestrator | Monday 05 January 2026 01:14:02 +0000 (0:00:00.227) 0:00:30.849 ******** 2026-01-05 01:14:13.471112 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:14:13.471117 | orchestrator | 2026-01-05 01:14:13.471122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:13.471127 | orchestrator | Monday 05 January 2026 01:14:02 +0000 (0:00:00.233) 0:00:31.083 ******** 2026-01-05 01:14:13.471132 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49) 2026-01-05 01:14:13.471139 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49) 2026-01-05 01:14:13.471144 | orchestrator | 2026-01-05 01:14:13.471148 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:13.471153 | orchestrator | Monday 05 January 2026 01:14:02 +0000 (0:00:00.457) 0:00:31.540 ******** 2026-01-05 01:14:13.471158 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70) 2026-01-05 01:14:13.471162 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70)
2026-01-05 01:14:13.471167 | orchestrator |
2026-01-05 01:14:13.471171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:14:13.471177 | orchestrator | Monday 05 January 2026 01:14:03 +0000 (0:00:00.452) 0:00:31.993 ********
2026-01-05 01:14:13.471181 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522)
2026-01-05 01:14:13.471202 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522)
2026-01-05 01:14:13.471210 | orchestrator |
2026-01-05 01:14:13.471219 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:14:13.471229 | orchestrator | Monday 05 January 2026 01:14:03 +0000 (0:00:00.465) 0:00:32.459 ********
2026-01-05 01:14:13.471237 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3)
2026-01-05 01:14:13.471244 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3)
2026-01-05 01:14:13.471251 | orchestrator |
2026-01-05 01:14:13.471258 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-05 01:14:13.471265 | orchestrator | Monday 05 January 2026 01:14:04 +0000 (0:00:00.454) 0:00:32.913 ********
2026-01-05 01:14:13.471272 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-05 01:14:13.471279 | orchestrator |
2026-01-05 01:14:13.471287 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471294 | orchestrator | Monday 05 January 2026 01:14:04 +0000 (0:00:00.344) 0:00:33.258 ********
2026-01-05 01:14:13.471301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-05 01:14:13.471310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-05 01:14:13.471316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-05 01:14:13.471320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-05 01:14:13.471325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-05 01:14:13.471329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-05 01:14:13.471334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-05 01:14:13.471339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-05 01:14:13.471343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-05 01:14:13.471348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-05 01:14:13.471352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-05 01:14:13.471357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-05 01:14:13.471361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-05 01:14:13.471366 | orchestrator |
2026-01-05 01:14:13.471370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471375 | orchestrator | Monday 05 January 2026 01:14:05 +0000 (0:00:00.446) 0:00:33.704 ********
2026-01-05 01:14:13.471385 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471391 | orchestrator |
2026-01-05 01:14:13.471396 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471400 | orchestrator | Monday 05 January 2026 01:14:05 +0000 (0:00:00.252) 0:00:33.956 ********
2026-01-05 01:14:13.471405 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471409 | orchestrator |
2026-01-05 01:14:13.471414 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471419 | orchestrator | Monday 05 January 2026 01:14:05 +0000 (0:00:00.734) 0:00:34.691 ********
2026-01-05 01:14:13.471423 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471428 | orchestrator |
2026-01-05 01:14:13.471450 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471461 | orchestrator | Monday 05 January 2026 01:14:06 +0000 (0:00:00.226) 0:00:34.918 ********
2026-01-05 01:14:13.471476 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471484 | orchestrator |
2026-01-05 01:14:13.471491 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471498 | orchestrator | Monday 05 January 2026 01:14:06 +0000 (0:00:00.209) 0:00:35.128 ********
2026-01-05 01:14:13.471506 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471514 | orchestrator |
2026-01-05 01:14:13.471519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471523 | orchestrator | Monday 05 January 2026 01:14:06 +0000 (0:00:00.215) 0:00:35.343 ********
2026-01-05 01:14:13.471528 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471532 | orchestrator |
2026-01-05 01:14:13.471537 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471541 | orchestrator | Monday 05 January 2026 01:14:06 +0000 (0:00:00.215) 0:00:35.559 ********
2026-01-05 01:14:13.471546 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471551 | orchestrator |
2026-01-05 01:14:13.471556 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471562 | orchestrator | Monday 05 January 2026 01:14:07 +0000 (0:00:00.228) 0:00:35.787 ********
2026-01-05 01:14:13.471568 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471573 | orchestrator |
2026-01-05 01:14:13.471578 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471584 | orchestrator | Monday 05 January 2026 01:14:07 +0000 (0:00:00.233) 0:00:36.021 ********
2026-01-05 01:14:13.471589 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-05 01:14:13.471595 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-05 01:14:13.471601 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-05 01:14:13.471606 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-05 01:14:13.471613 | orchestrator |
2026-01-05 01:14:13.471620 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471627 | orchestrator | Monday 05 January 2026 01:14:08 +0000 (0:00:00.868) 0:00:36.890 ********
2026-01-05 01:14:13.471638 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471646 | orchestrator |
2026-01-05 01:14:13.471653 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471660 | orchestrator | Monday 05 January 2026 01:14:08 +0000 (0:00:00.230) 0:00:37.120 ********
2026-01-05 01:14:13.471686 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471694 | orchestrator |
2026-01-05 01:14:13.471701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471709 | orchestrator | Monday 05 January 2026 01:14:08 +0000 (0:00:00.212) 0:00:37.333 ********
2026-01-05 01:14:13.471716 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471722 | orchestrator |
2026-01-05 01:14:13.471729 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-05 01:14:13.471735 | orchestrator | Monday 05 January 2026 01:14:08 +0000 (0:00:00.212) 0:00:37.546 ********
2026-01-05 01:14:13.471742 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471749 | orchestrator |
2026-01-05 01:14:13.471756 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-05 01:14:13.471763 | orchestrator | Monday 05 January 2026 01:14:09 +0000 (0:00:00.490) 0:00:38.036 ********
2026-01-05 01:14:13.471771 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471779 | orchestrator |
2026-01-05 01:14:13.471786 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-05 01:14:13.471794 | orchestrator | Monday 05 January 2026 01:14:09 +0000 (0:00:00.393) 0:00:38.429 ********
2026-01-05 01:14:13.471801 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cc420972-ce44-5a44-a5a6-a707e77471c5'}})
2026-01-05 01:14:13.471806 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'}})
2026-01-05 01:14:13.471817 | orchestrator |
2026-01-05 01:14:13.471821 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-05 01:14:13.471826 | orchestrator | Monday 05 January 2026 01:14:09 +0000 (0:00:00.217) 0:00:38.647 ********
2026-01-05 01:14:13.471832 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:13.471839 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:13.471843 | orchestrator |
2026-01-05 01:14:13.471848 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-05 01:14:13.471852 | orchestrator | Monday 05 January 2026 01:14:11 +0000 (0:00:01.902) 0:00:40.549 ********
2026-01-05 01:14:13.471857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:13.471867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:13.471872 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:13.471876 | orchestrator |
2026-01-05 01:14:13.471881 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-05 01:14:13.471885 | orchestrator | Monday 05 January 2026 01:14:12 +0000 (0:00:00.196) 0:00:40.747 ********
2026-01-05 01:14:13.471890 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:13.471900 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:19.538262 | orchestrator |
2026-01-05 01:14:19.538361 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-05 01:14:19.538370 | orchestrator | Monday 05 January 2026 01:14:13 +0000 (0:00:01.416) 0:00:42.163 ********
2026-01-05 01:14:19.538376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:19.538382 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:19.538398 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538409 | orchestrator |
2026-01-05 01:14:19.538414 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-05 01:14:19.538419 | orchestrator | Monday 05 January 2026 01:14:13 +0000 (0:00:00.164) 0:00:42.328 ********
2026-01-05 01:14:19.538424 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538428 | orchestrator |
2026-01-05 01:14:19.538433 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-05 01:14:19.538438 | orchestrator | Monday 05 January 2026 01:14:13 +0000 (0:00:00.150) 0:00:42.478 ********
2026-01-05 01:14:19.538443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:19.538447 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:19.538452 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538459 | orchestrator |
2026-01-05 01:14:19.538466 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-05 01:14:19.538473 | orchestrator | Monday 05 January 2026 01:14:13 +0000 (0:00:00.170) 0:00:42.649 ********
2026-01-05 01:14:19.538480 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538487 | orchestrator |
2026-01-05 01:14:19.538494 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-05 01:14:19.538521 | orchestrator | Monday 05 January 2026 01:14:14 +0000 (0:00:00.157) 0:00:42.806 ********
2026-01-05 01:14:19.538529 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:19.538537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:19.538544 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538551 | orchestrator |
2026-01-05 01:14:19.538558 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-05 01:14:19.538565 | orchestrator | Monday 05 January 2026 01:14:14 +0000 (0:00:00.166) 0:00:42.973 ********
2026-01-05 01:14:19.538572 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538579 | orchestrator |
2026-01-05 01:14:19.538588 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-05 01:14:19.538593 | orchestrator | Monday 05 January 2026 01:14:14 +0000 (0:00:00.156) 0:00:43.129 ********
2026-01-05 01:14:19.538597 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:19.538602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:19.538606 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538610 | orchestrator |
2026-01-05 01:14:19.538615 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-05 01:14:19.538619 | orchestrator | Monday 05 January 2026 01:14:14 +0000 (0:00:00.396) 0:00:43.526 ********
2026-01-05 01:14:19.538624 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:14:19.538629 | orchestrator |
2026-01-05 01:14:19.538633 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-05 01:14:19.538638 | orchestrator | Monday 05 January 2026 01:14:14 +0000 (0:00:00.134) 0:00:43.661 ********
2026-01-05 01:14:19.538642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:19.538646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:19.538651 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538683 | orchestrator |
2026-01-05 01:14:19.538698 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-05 01:14:19.538703 | orchestrator | Monday 05 January 2026 01:14:15 +0000 (0:00:00.165) 0:00:43.827 ********
2026-01-05 01:14:19.538708 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:19.538712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:19.538717 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538721 | orchestrator |
2026-01-05 01:14:19.538726 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-05 01:14:19.538744 | orchestrator | Monday 05 January 2026 01:14:15 +0000 (0:00:00.163) 0:00:43.990 ********
2026-01-05 01:14:19.538749 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:19.538754 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:19.538758 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538763 | orchestrator |
2026-01-05 01:14:19.538772 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-05 01:14:19.538777 | orchestrator | Monday 05 January 2026 01:14:15 +0000 (0:00:00.199) 0:00:44.190 ********
2026-01-05 01:14:19.538781 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538786 | orchestrator |
2026-01-05 01:14:19.538790 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-05 01:14:19.538795 | orchestrator | Monday 05 January 2026 01:14:15 +0000 (0:00:00.151) 0:00:44.342 ********
2026-01-05 01:14:19.538799 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538804 | orchestrator |
2026-01-05 01:14:19.538809 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-05 01:14:19.538814 | orchestrator | Monday 05 January 2026 01:14:15 +0000 (0:00:00.150) 0:00:44.493 ********
2026-01-05 01:14:19.538820 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.538825 | orchestrator |
2026-01-05 01:14:19.538830 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-05 01:14:19.538835 | orchestrator | Monday 05 January 2026 01:14:15 +0000 (0:00:00.171) 0:00:44.664 ********
2026-01-05 01:14:19.538840 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 01:14:19.538846 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-05 01:14:19.538851 | orchestrator | }
2026-01-05 01:14:19.538857 | orchestrator |
2026-01-05 01:14:19.538862 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-05 01:14:19.538867 | orchestrator | Monday 05 January 2026 01:14:16 +0000 (0:00:00.191) 0:00:44.856 ********
2026-01-05 01:14:19.538873 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 01:14:19.538878 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-05 01:14:19.538883 | orchestrator | }
2026-01-05 01:14:19.538889 | orchestrator |
2026-01-05 01:14:19.538894 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-05 01:14:19.538899 | orchestrator | Monday 05 January 2026 01:14:16 +0000 (0:00:00.150) 0:00:45.006 ********
2026-01-05 01:14:19.538904 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 01:14:19.538910 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-05 01:14:19.538915 | orchestrator | }
2026-01-05 01:14:19.538920 | orchestrator |
2026-01-05 01:14:19.538925 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-05 01:14:19.538930 | orchestrator | Monday 05 January 2026 01:14:16 +0000 (0:00:00.157) 0:00:45.164 ********
2026-01-05 01:14:19.538936 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:14:19.538941 | orchestrator |
2026-01-05 01:14:19.538946 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-05 01:14:19.538951 | orchestrator | Monday 05 January 2026 01:14:17 +0000 (0:00:00.585) 0:00:45.749 ********
2026-01-05 01:14:19.538957 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:14:19.538962 | orchestrator |
2026-01-05 01:14:19.538968 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-05 01:14:19.538975 | orchestrator | Monday 05 January 2026 01:14:17 +0000 (0:00:00.742) 0:00:46.492 ********
2026-01-05 01:14:19.538983 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:14:19.538990 | orchestrator |
2026-01-05 01:14:19.538997 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-05 01:14:19.539003 | orchestrator | Monday 05 January 2026 01:14:18 +0000 (0:00:00.552) 0:00:47.044 ********
2026-01-05 01:14:19.539010 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:14:19.539017 | orchestrator |
2026-01-05 01:14:19.539024 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-05 01:14:19.539031 | orchestrator | Monday 05 January 2026 01:14:18 +0000 (0:00:00.165) 0:00:47.209 ********
2026-01-05 01:14:19.539039 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.539047 | orchestrator |
2026-01-05 01:14:19.539055 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-05 01:14:19.539063 | orchestrator | Monday 05 January 2026 01:14:18 +0000 (0:00:00.125) 0:00:47.335 ********
2026-01-05 01:14:19.539072 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.539085 | orchestrator |
2026-01-05 01:14:19.539092 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-05 01:14:19.539099 | orchestrator | Monday 05 January 2026 01:14:18 +0000 (0:00:00.124) 0:00:47.460 ********
2026-01-05 01:14:19.539105 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 01:14:19.539112 | orchestrator |     "vgs_report": {
2026-01-05 01:14:19.539120 | orchestrator |         "vg": []
2026-01-05 01:14:19.539128 | orchestrator |     }
2026-01-05 01:14:19.539135 | orchestrator | }
2026-01-05 01:14:19.539142 | orchestrator |
2026-01-05 01:14:19.539149 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-05 01:14:19.539155 | orchestrator | Monday 05 January 2026 01:14:18 +0000 (0:00:00.155) 0:00:47.615 ********
2026-01-05 01:14:19.539162 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.539168 | orchestrator |
2026-01-05 01:14:19.539181 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-05 01:14:19.539188 | orchestrator | Monday 05 January 2026 01:14:19 +0000 (0:00:00.159) 0:00:47.775 ********
2026-01-05 01:14:19.539196 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.539204 | orchestrator |
2026-01-05 01:14:19.539212 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-05 01:14:19.539219 | orchestrator | Monday 05 January 2026 01:14:19 +0000 (0:00:00.162) 0:00:47.938 ********
2026-01-05 01:14:19.539227 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.539233 | orchestrator |
2026-01-05 01:14:19.539240 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-05 01:14:19.539247 | orchestrator | Monday 05 January 2026 01:14:19 +0000 (0:00:00.147) 0:00:48.085 ********
2026-01-05 01:14:19.539253 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:19.539259 | orchestrator |
2026-01-05 01:14:19.539273 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-05 01:14:24.739038 | orchestrator | Monday 05 January 2026 01:14:19 +0000 (0:00:00.149) 0:00:48.234 ********
2026-01-05 01:14:24.739166 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739188 | orchestrator |
2026-01-05 01:14:24.739205 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-05 01:14:24.739221 | orchestrator | Monday 05 January 2026 01:14:19 +0000 (0:00:00.157) 0:00:48.392 ********
2026-01-05 01:14:24.739236 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739251 | orchestrator |
2026-01-05 01:14:24.739266 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-05 01:14:24.739281 | orchestrator | Monday 05 January 2026 01:14:19 +0000 (0:00:00.150) 0:00:48.542 ********
2026-01-05 01:14:24.739296 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739311 | orchestrator |
2026-01-05 01:14:24.739326 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-05 01:14:24.739340 | orchestrator | Monday 05 January 2026 01:14:20 +0000 (0:00:00.385) 0:00:48.928 ********
2026-01-05 01:14:24.739355 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739370 | orchestrator |
2026-01-05 01:14:24.739384 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-05 01:14:24.739417 | orchestrator | Monday 05 January 2026 01:14:20 +0000 (0:00:00.149) 0:00:49.078 ********
2026-01-05 01:14:24.739444 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739459 | orchestrator |
2026-01-05 01:14:24.739474 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-05 01:14:24.739489 | orchestrator | Monday 05 January 2026 01:14:20 +0000 (0:00:00.165) 0:00:49.243 ********
2026-01-05 01:14:24.739504 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739520 | orchestrator |
2026-01-05 01:14:24.739534 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-05 01:14:24.739549 | orchestrator | Monday 05 January 2026 01:14:20 +0000 (0:00:00.156) 0:00:49.400 ********
2026-01-05 01:14:24.739564 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739579 | orchestrator |
2026-01-05 01:14:24.739594 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-05 01:14:24.739640 | orchestrator | Monday 05 January 2026 01:14:20 +0000 (0:00:00.152) 0:00:49.553 ********
2026-01-05 01:14:24.739676 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739690 | orchestrator |
2026-01-05 01:14:24.739705 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-05 01:14:24.739721 | orchestrator | Monday 05 January 2026 01:14:21 +0000 (0:00:00.173) 0:00:49.726 ********
2026-01-05 01:14:24.739736 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739745 | orchestrator |
2026-01-05 01:14:24.739754 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-05 01:14:24.739762 | orchestrator | Monday 05 January 2026 01:14:21 +0000 (0:00:00.138) 0:00:49.865 ********
2026-01-05 01:14:24.739771 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739780 | orchestrator |
2026-01-05 01:14:24.739789 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-05 01:14:24.739798 | orchestrator | Monday 05 January 2026 01:14:21 +0000 (0:00:00.150) 0:00:50.015 ********
2026-01-05 01:14:24.739808 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.739819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.739828 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739836 | orchestrator |
2026-01-05 01:14:24.739845 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-05 01:14:24.739853 | orchestrator | Monday 05 January 2026 01:14:21 +0000 (0:00:00.150) 0:00:50.165 ********
2026-01-05 01:14:24.739862 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.739871 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.739879 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739888 | orchestrator |
2026-01-05 01:14:24.739896 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-05 01:14:24.739909 | orchestrator | Monday 05 January 2026 01:14:21 +0000 (0:00:00.173) 0:00:50.339 ********
2026-01-05 01:14:24.739923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.739955 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.739968 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.739981 | orchestrator |
2026-01-05 01:14:24.739994 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-05 01:14:24.740007 | orchestrator | Monday 05 January 2026 01:14:21 +0000 (0:00:00.169) 0:00:50.508 ********
2026-01-05 01:14:24.740021 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.740035 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.740048 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.740061 | orchestrator |
2026-01-05 01:14:24.740099 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-05 01:14:24.740116 | orchestrator | Monday 05 January 2026 01:14:21 +0000 (0:00:00.167) 0:00:50.675 ********
2026-01-05 01:14:24.740132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.740159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.740168 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.740177 | orchestrator |
2026-01-05 01:14:24.740186 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-05 01:14:24.740198 | orchestrator | Monday 05 January 2026 01:14:22 +0000 (0:00:00.384) 0:00:51.060 ********
2026-01-05 01:14:24.740212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.740226 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.740241 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.740255 | orchestrator |
2026-01-05 01:14:24.740268 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-05 01:14:24.740280 | orchestrator | Monday 05 January 2026 01:14:22 +0000 (0:00:00.171) 0:00:51.232 ********
2026-01-05 01:14:24.740292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.740304 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.740317 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.740331 | orchestrator |
2026-01-05 01:14:24.740346 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-05 01:14:24.740362 | orchestrator | Monday 05 January 2026 01:14:22 +0000 (0:00:00.180) 0:00:51.412 ********
2026-01-05 01:14:24.740376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.740391 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.740403 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.740412 | orchestrator |
2026-01-05 01:14:24.740421 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-05 01:14:24.740430 | orchestrator | Monday 05 January 2026 01:14:22 +0000 (0:00:00.158) 0:00:51.571 ********
2026-01-05 01:14:24.740438 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:14:24.740447 | orchestrator |
2026-01-05 01:14:24.740456 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-05 01:14:24.740465 | orchestrator | Monday 05 January 2026 01:14:23 +0000 (0:00:00.573) 0:00:52.144 ********
2026-01-05 01:14:24.740473 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:14:24.740482 | orchestrator |
2026-01-05 01:14:24.740490 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-05 01:14:24.740499 | orchestrator | Monday 05 January 2026 01:14:24 +0000 (0:00:00.171) 0:00:52.723 ********
2026-01-05 01:14:24.740508 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:14:24.740516 | orchestrator |
2026-01-05 01:14:24.740525 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-05 01:14:24.740533 | orchestrator | Monday 05 January 2026 01:14:24 +0000 (0:00:00.211) 0:00:52.894 ********
2026-01-05 01:14:24.740542 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'vg_name': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.740553 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'vg_name': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.740561 | orchestrator |
2026-01-05 01:14:24.740570 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-05 01:14:24.740586 | orchestrator | Monday 05 January 2026 01:14:24 +0000 (0:00:00.211) 0:00:53.106 ********
2026-01-05 01:14:24.740596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.740604 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:24.740613 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:24.740622 | orchestrator |
2026-01-05 01:14:24.740631 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-05 01:14:24.740639 | orchestrator | Monday 05 January 2026 01:14:24 +0000 (0:00:00.158) 0:00:53.264 ********
2026-01-05 01:14:24.740711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})
2026-01-05 01:14:24.740732 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})
2026-01-05 01:14:31.447800 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:14:31.447926 | orchestrator |
2026-01-05 01:14:31.447951 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-05 01:14:31.447970 |
orchestrator | Monday 05 January 2026 01:14:24 +0000 (0:00:00.171) 0:00:53.436 ******** 2026-01-05 01:14:31.448076 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'})  2026-01-05 01:14:31.448104 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'})  2026-01-05 01:14:31.448120 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:14:31.448134 | orchestrator | 2026-01-05 01:14:31.448151 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-05 01:14:31.448168 | orchestrator | Monday 05 January 2026 01:14:24 +0000 (0:00:00.171) 0:00:53.608 ******** 2026-01-05 01:14:31.448183 | orchestrator | ok: [testbed-node-4] => { 2026-01-05 01:14:31.448198 | orchestrator |  "lvm_report": { 2026-01-05 01:14:31.448214 | orchestrator |  "lv": [ 2026-01-05 01:14:31.448230 | orchestrator |  { 2026-01-05 01:14:31.448245 | orchestrator |  "lv_name": "osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e", 2026-01-05 01:14:31.448260 | orchestrator |  "vg_name": "ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e" 2026-01-05 01:14:31.448276 | orchestrator |  }, 2026-01-05 01:14:31.448291 | orchestrator |  { 2026-01-05 01:14:31.448307 | orchestrator |  "lv_name": "osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5", 2026-01-05 01:14:31.448323 | orchestrator |  "vg_name": "ceph-cc420972-ce44-5a44-a5a6-a707e77471c5" 2026-01-05 01:14:31.448338 | orchestrator |  } 2026-01-05 01:14:31.448354 | orchestrator |  ], 2026-01-05 01:14:31.448370 | orchestrator |  "pv": [ 2026-01-05 01:14:31.448383 | orchestrator |  { 2026-01-05 01:14:31.448393 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-05 01:14:31.448403 | orchestrator |  "vg_name": "ceph-cc420972-ce44-5a44-a5a6-a707e77471c5" 2026-01-05 01:14:31.448413 | orchestrator |  }, 2026-01-05 
01:14:31.448423 | orchestrator |  { 2026-01-05 01:14:31.448434 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-05 01:14:31.448444 | orchestrator |  "vg_name": "ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e" 2026-01-05 01:14:31.448454 | orchestrator |  } 2026-01-05 01:14:31.448465 | orchestrator |  ] 2026-01-05 01:14:31.448476 | orchestrator |  } 2026-01-05 01:14:31.448486 | orchestrator | } 2026-01-05 01:14:31.448496 | orchestrator | 2026-01-05 01:14:31.448507 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-05 01:14:31.448517 | orchestrator | 2026-01-05 01:14:31.448546 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-05 01:14:31.448555 | orchestrator | Monday 05 January 2026 01:14:25 +0000 (0:00:00.570) 0:00:54.178 ******** 2026-01-05 01:14:31.448563 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-05 01:14:31.448572 | orchestrator | 2026-01-05 01:14:31.448581 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-05 01:14:31.448590 | orchestrator | Monday 05 January 2026 01:14:25 +0000 (0:00:00.270) 0:00:54.449 ******** 2026-01-05 01:14:31.448598 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:14:31.448607 | orchestrator | 2026-01-05 01:14:31.448616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.448624 | orchestrator | Monday 05 January 2026 01:14:25 +0000 (0:00:00.251) 0:00:54.700 ******** 2026-01-05 01:14:31.448633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-05 01:14:31.448666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-05 01:14:31.448675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-05 01:14:31.448684 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-05 01:14:31.448692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-05 01:14:31.448701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-05 01:14:31.448710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-05 01:14:31.448718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-05 01:14:31.448745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-05 01:14:31.448761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-05 01:14:31.448784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-05 01:14:31.448800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-05 01:14:31.448815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-05 01:14:31.448830 | orchestrator | 2026-01-05 01:14:31.448846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.448859 | orchestrator | Monday 05 January 2026 01:14:26 +0000 (0:00:00.451) 0:00:55.152 ******** 2026-01-05 01:14:31.448874 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:31.448888 | orchestrator | 2026-01-05 01:14:31.448901 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.448916 | orchestrator | Monday 05 January 2026 01:14:26 +0000 (0:00:00.211) 0:00:55.363 ******** 2026-01-05 01:14:31.448930 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:31.448944 | orchestrator | 2026-01-05 
01:14:31.448959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.448997 | orchestrator | Monday 05 January 2026 01:14:26 +0000 (0:00:00.214) 0:00:55.577 ******** 2026-01-05 01:14:31.449014 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:31.449029 | orchestrator | 2026-01-05 01:14:31.449044 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449058 | orchestrator | Monday 05 January 2026 01:14:27 +0000 (0:00:00.210) 0:00:55.787 ******** 2026-01-05 01:14:31.449073 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:31.449082 | orchestrator | 2026-01-05 01:14:31.449090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449099 | orchestrator | Monday 05 January 2026 01:14:27 +0000 (0:00:00.209) 0:00:55.997 ******** 2026-01-05 01:14:31.449108 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:31.449116 | orchestrator | 2026-01-05 01:14:31.449125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449144 | orchestrator | Monday 05 January 2026 01:14:27 +0000 (0:00:00.213) 0:00:56.210 ******** 2026-01-05 01:14:31.449153 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:31.449161 | orchestrator | 2026-01-05 01:14:31.449170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449193 | orchestrator | Monday 05 January 2026 01:14:28 +0000 (0:00:00.680) 0:00:56.891 ******** 2026-01-05 01:14:31.449203 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:31.449220 | orchestrator | 2026-01-05 01:14:31.449230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449238 | orchestrator | Monday 05 January 2026 01:14:28 +0000 (0:00:00.216) 
0:00:57.107 ******** 2026-01-05 01:14:31.449247 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:31.449256 | orchestrator | 2026-01-05 01:14:31.449265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449273 | orchestrator | Monday 05 January 2026 01:14:28 +0000 (0:00:00.212) 0:00:57.320 ******** 2026-01-05 01:14:31.449282 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b) 2026-01-05 01:14:31.449293 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b) 2026-01-05 01:14:31.449301 | orchestrator | 2026-01-05 01:14:31.449310 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449319 | orchestrator | Monday 05 January 2026 01:14:29 +0000 (0:00:00.467) 0:00:57.787 ******** 2026-01-05 01:14:31.449327 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b) 2026-01-05 01:14:31.449336 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b) 2026-01-05 01:14:31.449366 | orchestrator | 2026-01-05 01:14:31.449381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449395 | orchestrator | Monday 05 January 2026 01:14:29 +0000 (0:00:00.507) 0:00:58.295 ******** 2026-01-05 01:14:31.449410 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0) 2026-01-05 01:14:31.449425 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0) 2026-01-05 01:14:31.449439 | orchestrator | 2026-01-05 01:14:31.449454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449469 | orchestrator | Monday 05 
January 2026 01:14:30 +0000 (0:00:00.460) 0:00:58.755 ******** 2026-01-05 01:14:31.449484 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2) 2026-01-05 01:14:31.449499 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2) 2026-01-05 01:14:31.449514 | orchestrator | 2026-01-05 01:14:31.449529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-05 01:14:31.449543 | orchestrator | Monday 05 January 2026 01:14:30 +0000 (0:00:00.572) 0:00:59.328 ******** 2026-01-05 01:14:31.449558 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-05 01:14:31.449573 | orchestrator | 2026-01-05 01:14:31.449587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:31.449601 | orchestrator | Monday 05 January 2026 01:14:30 +0000 (0:00:00.355) 0:00:59.683 ******** 2026-01-05 01:14:31.449617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-05 01:14:31.449630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-05 01:14:31.449709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-05 01:14:31.449724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-05 01:14:31.449745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-05 01:14:31.449763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-05 01:14:31.449772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-05 01:14:31.449780 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-05 01:14:31.449789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-05 01:14:31.449797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-05 01:14:31.449806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-05 01:14:31.449823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-05 01:14:41.363375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-05 01:14:41.364236 | orchestrator | 2026-01-05 01:14:41.364283 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364290 | orchestrator | Monday 05 January 2026 01:14:31 +0000 (0:00:00.459) 0:01:00.142 ******** 2026-01-05 01:14:41.364294 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364299 | orchestrator | 2026-01-05 01:14:41.364304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364308 | orchestrator | Monday 05 January 2026 01:14:31 +0000 (0:00:00.199) 0:01:00.342 ******** 2026-01-05 01:14:41.364312 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364316 | orchestrator | 2026-01-05 01:14:41.364321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364325 | orchestrator | Monday 05 January 2026 01:14:32 +0000 (0:00:00.751) 0:01:01.093 ******** 2026-01-05 01:14:41.364329 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364334 | orchestrator | 2026-01-05 01:14:41.364338 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364342 | 
orchestrator | Monday 05 January 2026 01:14:32 +0000 (0:00:00.254) 0:01:01.348 ******** 2026-01-05 01:14:41.364346 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364350 | orchestrator | 2026-01-05 01:14:41.364354 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364358 | orchestrator | Monday 05 January 2026 01:14:32 +0000 (0:00:00.214) 0:01:01.562 ******** 2026-01-05 01:14:41.364362 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364366 | orchestrator | 2026-01-05 01:14:41.364370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364374 | orchestrator | Monday 05 January 2026 01:14:33 +0000 (0:00:00.215) 0:01:01.777 ******** 2026-01-05 01:14:41.364378 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364382 | orchestrator | 2026-01-05 01:14:41.364386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364390 | orchestrator | Monday 05 January 2026 01:14:33 +0000 (0:00:00.224) 0:01:02.001 ******** 2026-01-05 01:14:41.364394 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364398 | orchestrator | 2026-01-05 01:14:41.364402 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364406 | orchestrator | Monday 05 January 2026 01:14:33 +0000 (0:00:00.211) 0:01:02.213 ******** 2026-01-05 01:14:41.364410 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364413 | orchestrator | 2026-01-05 01:14:41.364417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364422 | orchestrator | Monday 05 January 2026 01:14:33 +0000 (0:00:00.224) 0:01:02.438 ******** 2026-01-05 01:14:41.364426 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-05 01:14:41.364431 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-05 01:14:41.364435 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-05 01:14:41.364457 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-05 01:14:41.364461 | orchestrator | 2026-01-05 01:14:41.364465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364469 | orchestrator | Monday 05 January 2026 01:14:34 +0000 (0:00:00.701) 0:01:03.139 ******** 2026-01-05 01:14:41.364473 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364477 | orchestrator | 2026-01-05 01:14:41.364482 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364486 | orchestrator | Monday 05 January 2026 01:14:34 +0000 (0:00:00.236) 0:01:03.376 ******** 2026-01-05 01:14:41.364490 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364494 | orchestrator | 2026-01-05 01:14:41.364498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364502 | orchestrator | Monday 05 January 2026 01:14:34 +0000 (0:00:00.254) 0:01:03.630 ******** 2026-01-05 01:14:41.364505 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364509 | orchestrator | 2026-01-05 01:14:41.364513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-05 01:14:41.364517 | orchestrator | Monday 05 January 2026 01:14:35 +0000 (0:00:00.204) 0:01:03.834 ******** 2026-01-05 01:14:41.364521 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364525 | orchestrator | 2026-01-05 01:14:41.364529 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-05 01:14:41.364533 | orchestrator | Monday 05 January 2026 01:14:35 +0000 (0:00:00.222) 0:01:04.056 ******** 2026-01-05 01:14:41.364537 | orchestrator | skipping: [testbed-node-5] 2026-01-05 
01:14:41.364541 | orchestrator | 2026-01-05 01:14:41.364545 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-05 01:14:41.364549 | orchestrator | Monday 05 January 2026 01:14:35 +0000 (0:00:00.390) 0:01:04.447 ******** 2026-01-05 01:14:41.364563 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13a82a55-1430-5b0a-a1a4-baa9d6ca4414'}}) 2026-01-05 01:14:41.364568 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '124df3d1-788c-586c-b42c-9b6f84a90775'}}) 2026-01-05 01:14:41.364572 | orchestrator | 2026-01-05 01:14:41.364576 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-05 01:14:41.364580 | orchestrator | Monday 05 January 2026 01:14:35 +0000 (0:00:00.207) 0:01:04.654 ******** 2026-01-05 01:14:41.364586 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'}) 2026-01-05 01:14:41.364592 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'}) 2026-01-05 01:14:41.364596 | orchestrator | 2026-01-05 01:14:41.364600 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-05 01:14:41.364619 | orchestrator | Monday 05 January 2026 01:14:37 +0000 (0:00:01.997) 0:01:06.652 ******** 2026-01-05 01:14:41.364623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})  2026-01-05 01:14:41.364663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})  2026-01-05 01:14:41.364668 | orchestrator | skipping: 
[testbed-node-5] 2026-01-05 01:14:41.364671 | orchestrator | 2026-01-05 01:14:41.364675 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-05 01:14:41.364679 | orchestrator | Monday 05 January 2026 01:14:38 +0000 (0:00:00.179) 0:01:06.831 ******** 2026-01-05 01:14:41.364683 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'}) 2026-01-05 01:14:41.364687 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'}) 2026-01-05 01:14:41.364696 | orchestrator | 2026-01-05 01:14:41.364700 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-05 01:14:41.364703 | orchestrator | Monday 05 January 2026 01:14:39 +0000 (0:00:01.507) 0:01:08.339 ******** 2026-01-05 01:14:41.364707 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})  2026-01-05 01:14:41.364711 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})  2026-01-05 01:14:41.364715 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364719 | orchestrator | 2026-01-05 01:14:41.364723 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-05 01:14:41.364727 | orchestrator | Monday 05 January 2026 01:14:39 +0000 (0:00:00.149) 0:01:08.489 ******** 2026-01-05 01:14:41.364731 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364735 | orchestrator | 2026-01-05 01:14:41.364739 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-05 01:14:41.364743 | 
orchestrator | Monday 05 January 2026 01:14:39 +0000 (0:00:00.209) 0:01:08.698 ******** 2026-01-05 01:14:41.364747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})  2026-01-05 01:14:41.364751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})  2026-01-05 01:14:41.364755 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364759 | orchestrator | 2026-01-05 01:14:41.364763 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-05 01:14:41.364767 | orchestrator | Monday 05 January 2026 01:14:40 +0000 (0:00:00.174) 0:01:08.873 ******** 2026-01-05 01:14:41.364771 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364775 | orchestrator | 2026-01-05 01:14:41.364779 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-05 01:14:41.364783 | orchestrator | Monday 05 January 2026 01:14:40 +0000 (0:00:00.152) 0:01:09.025 ******** 2026-01-05 01:14:41.364787 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})  2026-01-05 01:14:41.364791 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})  2026-01-05 01:14:41.364795 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364799 | orchestrator | 2026-01-05 01:14:41.364803 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-05 01:14:41.364807 | orchestrator | Monday 05 January 2026 01:14:40 +0000 (0:00:00.165) 0:01:09.190 ******** 2026-01-05 01:14:41.364811 | orchestrator | 
skipping: [testbed-node-5] 2026-01-05 01:14:41.364815 | orchestrator | 2026-01-05 01:14:41.364819 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-05 01:14:41.364822 | orchestrator | Monday 05 January 2026 01:14:40 +0000 (0:00:00.147) 0:01:09.338 ******** 2026-01-05 01:14:41.364829 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})  2026-01-05 01:14:41.364833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})  2026-01-05 01:14:41.364837 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:41.364841 | orchestrator | 2026-01-05 01:14:41.364845 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-05 01:14:41.364849 | orchestrator | Monday 05 January 2026 01:14:40 +0000 (0:00:00.160) 0:01:09.498 ******** 2026-01-05 01:14:41.364857 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:14:41.364861 | orchestrator | 2026-01-05 01:14:41.364865 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-05 01:14:41.364869 | orchestrator | Monday 05 January 2026 01:14:41 +0000 (0:00:00.397) 0:01:09.896 ******** 2026-01-05 01:14:41.364878 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})  2026-01-05 01:14:48.167682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})  2026-01-05 01:14:48.167820 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:48.167846 | orchestrator | 2026-01-05 01:14:48.167866 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-05 01:14:48.167886 | orchestrator | Monday 05 January 2026 01:14:41 +0000 (0:00:00.166) 0:01:10.062 ******** 2026-01-05 01:14:48.167904 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})  2026-01-05 01:14:48.167921 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})  2026-01-05 01:14:48.167938 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:48.167955 | orchestrator | 2026-01-05 01:14:48.167973 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-05 01:14:48.167989 | orchestrator | Monday 05 January 2026 01:14:41 +0000 (0:00:00.167) 0:01:10.230 ******** 2026-01-05 01:14:48.168006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})  2026-01-05 01:14:48.168022 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})  2026-01-05 01:14:48.168040 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:48.168057 | orchestrator | 2026-01-05 01:14:48.168073 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-05 01:14:48.168090 | orchestrator | Monday 05 January 2026 01:14:41 +0000 (0:00:00.164) 0:01:10.395 ******** 2026-01-05 01:14:48.168106 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:14:48.168123 | orchestrator | 2026-01-05 01:14:48.168140 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-05 01:14:48.168157 | orchestrator | Monday 05 January 2026 01:14:41 +0000 
(0:00:00.135) 0:01:10.530 ********
2026-01-05 01:14:48.168174 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.168192 | orchestrator |
2026-01-05 01:14:48.168210 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-05 01:14:48.168226 | orchestrator | Monday 05 January 2026  01:14:41 +0000 (0:00:00.153)       0:01:10.683 ********
2026-01-05 01:14:48.168242 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.168257 | orchestrator |
2026-01-05 01:14:48.168271 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-05 01:14:48.168285 | orchestrator | Monday 05 January 2026  01:14:42 +0000 (0:00:00.152)       0:01:10.836 ********
2026-01-05 01:14:48.168300 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 01:14:48.168318 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-05 01:14:48.168334 | orchestrator | }
2026-01-05 01:14:48.168351 | orchestrator |
2026-01-05 01:14:48.168368 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-05 01:14:48.168385 | orchestrator | Monday 05 January 2026  01:14:42 +0000 (0:00:00.154)       0:01:10.991 ********
2026-01-05 01:14:48.168401 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 01:14:48.168418 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-05 01:14:48.168435 | orchestrator | }
2026-01-05 01:14:48.168453 | orchestrator |
2026-01-05 01:14:48.168471 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-05 01:14:48.168520 | orchestrator | Monday 05 January 2026  01:14:42 +0000 (0:00:00.162)       0:01:11.154 ********
2026-01-05 01:14:48.168536 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 01:14:48.168552 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-05 01:14:48.168569 | orchestrator | }
2026-01-05 01:14:48.168585 | orchestrator |
2026-01-05 01:14:48.168601 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-05 01:14:48.168642 | orchestrator | Monday 05 January 2026  01:14:42 +0000 (0:00:00.148)       0:01:11.302 ********
2026-01-05 01:14:48.168676 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:14:48.168703 | orchestrator |
2026-01-05 01:14:48.168720 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-05 01:14:48.168735 | orchestrator | Monday 05 January 2026  01:14:43 +0000 (0:00:00.564)       0:01:11.866 ********
2026-01-05 01:14:48.168753 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:14:48.168769 | orchestrator |
2026-01-05 01:14:48.168785 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-05 01:14:48.168799 | orchestrator | Monday 05 January 2026  01:14:43 +0000 (0:00:00.741)       0:01:12.608 ********
2026-01-05 01:14:48.168809 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:14:48.168819 | orchestrator |
2026-01-05 01:14:48.168844 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-05 01:14:48.168858 | orchestrator | Monday 05 January 2026  01:14:44 +0000 (0:00:00.169)       0:01:13.410 ********
2026-01-05 01:14:48.168874 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:14:48.168894 | orchestrator |
2026-01-05 01:14:48.168918 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-05 01:14:48.168933 | orchestrator | Monday 05 January 2026  01:14:44 +0000 (0:00:00.169)       0:01:13.580 ********
2026-01-05 01:14:48.168950 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.168966 | orchestrator |
2026-01-05 01:14:48.168982 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-05 01:14:48.168998 | orchestrator | Monday 05 January 2026  01:14:44 +0000 (0:00:00.118)       0:01:13.699 ********
2026-01-05 01:14:48.169009 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169018 | orchestrator |
2026-01-05 01:14:48.169028 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-05 01:14:48.169038 | orchestrator | Monday 05 January 2026  01:14:45 +0000 (0:00:00.123)       0:01:13.822 ********
2026-01-05 01:14:48.169047 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 01:14:48.169057 | orchestrator |     "vgs_report": {
2026-01-05 01:14:48.169067 | orchestrator |         "vg": []
2026-01-05 01:14:48.169098 | orchestrator |     }
2026-01-05 01:14:48.169108 | orchestrator | }
2026-01-05 01:14:48.169118 | orchestrator |
2026-01-05 01:14:48.169127 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-05 01:14:48.169137 | orchestrator | Monday 05 January 2026  01:14:45 +0000 (0:00:00.160)       0:01:13.983 ********
2026-01-05 01:14:48.169147 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169157 | orchestrator |
2026-01-05 01:14:48.169166 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-05 01:14:48.169176 | orchestrator | Monday 05 January 2026  01:14:45 +0000 (0:00:00.155)       0:01:14.139 ********
2026-01-05 01:14:48.169186 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169195 | orchestrator |
2026-01-05 01:14:48.169205 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-05 01:14:48.169215 | orchestrator | Monday 05 January 2026  01:14:45 +0000 (0:00:00.158)       0:01:14.297 ********
2026-01-05 01:14:48.169224 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169234 | orchestrator |
2026-01-05 01:14:48.169248 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-05 01:14:48.169272 | orchestrator | Monday 05 January 2026  01:14:45 +0000 (0:00:00.139)       0:01:14.436 ********
2026-01-05 01:14:48.169290 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169305 | orchestrator |
2026-01-05 01:14:48.169320 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-05 01:14:48.169350 | orchestrator | Monday 05 January 2026  01:14:45 +0000 (0:00:00.160)       0:01:14.597 ********
2026-01-05 01:14:48.169366 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169381 | orchestrator |
2026-01-05 01:14:48.169397 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-05 01:14:48.169413 | orchestrator | Monday 05 January 2026  01:14:46 +0000 (0:00:00.153)       0:01:14.750 ********
2026-01-05 01:14:48.169429 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169445 | orchestrator |
2026-01-05 01:14:48.169461 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-05 01:14:48.169476 | orchestrator | Monday 05 January 2026  01:14:46 +0000 (0:00:00.165)       0:01:14.916 ********
2026-01-05 01:14:48.169492 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169509 | orchestrator |
2026-01-05 01:14:48.169526 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-05 01:14:48.169542 | orchestrator | Monday 05 January 2026  01:14:46 +0000 (0:00:00.162)       0:01:15.078 ********
2026-01-05 01:14:48.169558 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169572 | orchestrator |
2026-01-05 01:14:48.169582 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-05 01:14:48.169592 | orchestrator | Monday 05 January 2026  01:14:46 +0000 (0:00:00.376)       0:01:15.454 ********
2026-01-05 01:14:48.169601 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169611 | orchestrator |
2026-01-05 01:14:48.169696 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-05 01:14:48.169716 | orchestrator | Monday 05 January 2026  01:14:46 +0000 (0:00:00.151)       0:01:15.606 ********
2026-01-05 01:14:48.169732 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169748 | orchestrator |
2026-01-05 01:14:48.169763 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-05 01:14:48.169779 | orchestrator | Monday 05 January 2026  01:14:47 +0000 (0:00:00.151)       0:01:15.758 ********
2026-01-05 01:14:48.169796 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169812 | orchestrator |
2026-01-05 01:14:48.169829 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-05 01:14:48.169843 | orchestrator | Monday 05 January 2026  01:14:47 +0000 (0:00:00.144)       0:01:15.902 ********
2026-01-05 01:14:48.169859 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169875 | orchestrator |
2026-01-05 01:14:48.169893 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-05 01:14:48.169909 | orchestrator | Monday 05 January 2026  01:14:47 +0000 (0:00:00.161)       0:01:16.064 ********
2026-01-05 01:14:48.169926 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169938 | orchestrator |
2026-01-05 01:14:48.169947 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-05 01:14:48.169957 | orchestrator | Monday 05 January 2026  01:14:47 +0000 (0:00:00.146)       0:01:16.210 ********
2026-01-05 01:14:48.169967 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.169976 | orchestrator |
2026-01-05 01:14:48.169986 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-05 01:14:48.169996 | orchestrator | Monday 05 January 2026  01:14:47 +0000 (0:00:00.138)       0:01:16.349 ********
2026-01-05 01:14:48.170006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:48.170094 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:48.170108 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.170118 | orchestrator |
2026-01-05 01:14:48.170128 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-05 01:14:48.170137 | orchestrator | Monday 05 January 2026  01:14:47 +0000 (0:00:00.177)       0:01:16.527 ********
2026-01-05 01:14:48.170147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:48.170240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:48.170253 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:48.170263 | orchestrator |
2026-01-05 01:14:48.170273 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-05 01:14:48.170283 | orchestrator | Monday 05 January 2026  01:14:47 +0000 (0:00:00.170)       0:01:16.697 ********
2026-01-05 01:14:48.170309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.491195 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492027 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:51.492051 | orchestrator |
2026-01-05 01:14:51.492058 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-05 01:14:51.492065 | orchestrator | Monday 05 January 2026  01:14:48 +0000 (0:00:00.169)       0:01:16.867 ********
2026-01-05 01:14:51.492069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.492074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492078 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:51.492082 | orchestrator |
2026-01-05 01:14:51.492086 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-05 01:14:51.492090 | orchestrator | Monday 05 January 2026  01:14:48 +0000 (0:00:00.165)       0:01:17.033 ********
2026-01-05 01:14:51.492093 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.492097 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492101 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:51.492105 | orchestrator |
2026-01-05 01:14:51.492109 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-05 01:14:51.492113 | orchestrator | Monday 05 January 2026  01:14:48 +0000 (0:00:00.158)       0:01:17.191 ********
2026-01-05 01:14:51.492116 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.492120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492124 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:51.492128 | orchestrator |
2026-01-05 01:14:51.492131 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-05 01:14:51.492135 | orchestrator | Monday 05 January 2026  01:14:48 +0000 (0:00:00.410)       0:01:17.601 ********
2026-01-05 01:14:51.492139 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.492146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492152 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:51.492158 | orchestrator |
2026-01-05 01:14:51.492165 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-05 01:14:51.492171 | orchestrator | Monday 05 January 2026  01:14:49 +0000 (0:00:00.184)       0:01:17.786 ********
2026-01-05 01:14:51.492200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.492204 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492208 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:51.492212 | orchestrator |
2026-01-05 01:14:51.492215 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-05 01:14:51.492219 | orchestrator | Monday 05 January 2026  01:14:49 +0000 (0:00:00.156)       0:01:17.942 ********
2026-01-05 01:14:51.492223 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:14:51.492228 | orchestrator |
2026-01-05 01:14:51.492232 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-05 01:14:51.492246 | orchestrator | Monday 05 January 2026  01:14:49 +0000 (0:00:00.576)       0:01:18.518 ********
2026-01-05 01:14:51.492250 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:14:51.492253 | orchestrator |
2026-01-05 01:14:51.492257 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-05 01:14:51.492261 | orchestrator | Monday 05 January 2026  01:14:50 +0000 (0:00:00.591)       0:01:19.110 ********
2026-01-05 01:14:51.492264 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:14:51.492268 | orchestrator |
2026-01-05 01:14:51.492272 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-05 01:14:51.492276 | orchestrator | Monday 05 January 2026  01:14:50 +0000 (0:00:00.159)       0:01:19.269 ********
2026-01-05 01:14:51.492279 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'vg_name': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492284 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'vg_name': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.492288 | orchestrator |
2026-01-05 01:14:51.492292 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-05 01:14:51.492296 | orchestrator | Monday 05 January 2026  01:14:50 +0000 (0:00:00.180)       0:01:19.449 ********
2026-01-05 01:14:51.492315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.492319 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492325 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:51.492331 | orchestrator |
2026-01-05 01:14:51.492337 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-05 01:14:51.492343 | orchestrator | Monday 05 January 2026  01:14:50 +0000 (0:00:00.209)       0:01:19.659 ********
2026-01-05 01:14:51.492350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.492356 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492362 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:51.492371 | orchestrator |
2026-01-05 01:14:51.492377 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-05 01:14:51.492382 | orchestrator | Monday 05 January 2026  01:14:51 +0000 (0:00:00.188)       0:01:19.847 ********
2026-01-05 01:14:51.492392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'})
2026-01-05 01:14:51.492399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'})
2026-01-05 01:14:51.492413 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:14:51.492417 | orchestrator |
2026-01-05 01:14:51.492421 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-05 01:14:51.492425 | orchestrator | Monday 05 January 2026  01:14:51 +0000 (0:00:00.168)       0:01:20.016 ********
2026-01-05 01:14:51.492428 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 01:14:51.492432 | orchestrator |     "lvm_report": {
2026-01-05 01:14:51.492436 | orchestrator |         "lv": [
2026-01-05 01:14:51.492441 | orchestrator |             {
2026-01-05 01:14:51.492447 | orchestrator |                 "lv_name": "osd-block-124df3d1-788c-586c-b42c-9b6f84a90775",
2026-01-05 01:14:51.492454 | orchestrator |                 "vg_name": "ceph-124df3d1-788c-586c-b42c-9b6f84a90775"
2026-01-05 01:14:51.492461 | orchestrator |             },
2026-01-05 01:14:51.492467 | orchestrator |             {
2026-01-05 01:14:51.492473 | orchestrator |                 "lv_name": "osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414",
2026-01-05 01:14:51.492477 | orchestrator |                 "vg_name": "ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414"
2026-01-05 01:14:51.492481 | orchestrator |             }
2026-01-05 01:14:51.492485 | orchestrator |         ],
2026-01-05 01:14:51.492489 | orchestrator |         "pv": [
2026-01-05 01:14:51.492492 | orchestrator |             {
2026-01-05 01:14:51.492496 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-05 01:14:51.492500 | orchestrator |                 "vg_name": "ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414"
2026-01-05 01:14:51.492504 | orchestrator |             },
2026-01-05 01:14:51.492507 | orchestrator |             {
2026-01-05 01:14:51.492511 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-05 01:14:51.492515 | orchestrator |                 "vg_name": "ceph-124df3d1-788c-586c-b42c-9b6f84a90775"
2026-01-05 01:14:51.492519 | orchestrator |             }
2026-01-05 01:14:51.492523 | orchestrator |         ]
2026-01-05 01:14:51.492527 | orchestrator |     }
2026-01-05 01:14:51.492531 | orchestrator | }
2026-01-05 01:14:51.492535 | orchestrator |
2026-01-05 01:14:51.492538 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:14:51.492542 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-05 01:14:51.492546 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-05 01:14:51.492550 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-05 01:14:51.492554 | orchestrator |
2026-01-05 01:14:51.492558 | orchestrator |
2026-01-05 01:14:51.492561 | orchestrator |
2026-01-05 01:14:51.492568 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:14:51.492572 | orchestrator | Monday 05 January 2026  01:14:51 +0000 (0:00:00.148)       0:01:20.164 ********
2026-01-05 01:14:51.492576 | orchestrator | ===============================================================================
2026-01-05 01:14:51.492580 | orchestrator | Create block VGs -------------------------------------------------------- 6.00s
2026-01-05 01:14:51.492584 | orchestrator | Create block LVs -------------------------------------------------------- 4.57s
2026-01-05 01:14:51.492587 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 2.03s
2026-01-05 01:14:51.492591 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.91s
2026-01-05 01:14:51.492595 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.90s
2026-01-05 01:14:51.492599 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.74s
2026-01-05 01:14:51.492603 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.71s
2026-01-05 01:14:51.492606 | orchestrator | Add known links to the list of available block devices ------------------ 1.46s
2026-01-05 01:14:51.492649 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s
2026-01-05 01:14:51.924667 | orchestrator | Print LVM report data --------------------------------------------------- 1.24s
2026-01-05 01:14:51.924749 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s
2026-01-05 01:14:51.924756 | orchestrator | Create WAL LVs for ceph_db_wal_devices ---------------------------------- 0.93s
2026-01-05 01:14:51.924761 | orchestrator | Calculate size needed for WAL LVs on ceph_db_wal_devices ---------------- 0.93s
2026-01-05 01:14:51.924765 | orchestrator | Check whether ceph_db_wal_devices is used exclusively ------------------- 0.93s
2026-01-05 01:14:51.924769 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2026-01-05 01:14:51.924773 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.82s
2026-01-05 01:14:51.924777 | orchestrator | Get initial list of available block devices ----------------------------- 0.81s
2026-01-05 01:14:51.924781 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.76s
2026-01-05 01:14:51.924785 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.75s
2026-01-05 01:14:51.924789 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-01-05 01:15:04.477218 | orchestrator | 2026-01-05 01:15:04 | INFO  | Task cd78c33e-aa4b-41dd-94db-eaa467c0572f (facts) was prepared for execution.
2026-01-05 01:15:04.477309 | orchestrator | 2026-01-05 01:15:04 | INFO  | It takes a moment until task cd78c33e-aa4b-41dd-94db-eaa467c0572f (facts) has been started and output is visible here.
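Editor's note: the play above gathers DB/WAL/DB+WAL VG data as JSON ("Gather DB VGs with total and available size in bytes", then "Combine JSON from _db/wal/db_wal_vgs_cmd_output") before checking whether the requested LVs fit. As a hedged illustration only (the role's internals are not shown in this log; the sample report below is hypothetical), parsing a `vgs --units b --reportformat json` style report into per-VG totals could look like this:

```python
import json

# Hypothetical sample shaped like `vgs --units b --reportformat json` output;
# the VG name matches one seen in this log, the sizes are made up.
VGS_JSON = """
{
  "report": [
    {
      "vg": [
        {"vg_name": "ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414",
         "vg_size": "21470642176B", "vg_free": "0B"}
      ]
    }
  ]
}
"""

def vg_sizes(report_text):
    """Map each VG name to its total and free size in bytes."""
    report = json.loads(report_text)
    sizes = {}
    for section in report["report"]:
        for vg in section.get("vg", []):
            sizes[vg["vg_name"]] = {
                "total": int(vg["vg_size"].rstrip("B")),
                "free": int(vg["vg_free"].rstrip("B")),
            }
    return sizes

print(vg_sizes(VGS_JSON))
```

With such a mapping in hand, "Fail if size of DB LVs ... > available" style checks reduce to comparing the requested LV sizes against each VG's `free` value.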
2026-01-05 01:15:18.334316 | orchestrator |
2026-01-05 01:15:18.334399 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-05 01:15:18.334406 | orchestrator |
2026-01-05 01:15:18.334411 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-05 01:15:18.334416 | orchestrator | Monday 05 January 2026  01:15:08 +0000 (0:00:00.296)       0:00:00.296 ********
2026-01-05 01:15:18.334420 | orchestrator | ok: [testbed-manager]
2026-01-05 01:15:18.334425 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:18.334430 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:18.334434 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:18.334438 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:18.334441 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:18.334445 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:18.334449 | orchestrator |
2026-01-05 01:15:18.334453 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-05 01:15:18.334457 | orchestrator | Monday 05 January 2026  01:15:10 +0000 (0:00:01.247)       0:00:01.544 ********
2026-01-05 01:15:18.334461 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:15:18.334465 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:15:18.334469 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:15:18.334473 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:15:18.334476 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:15:18.334480 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:15:18.334484 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:15:18.334488 | orchestrator |
2026-01-05 01:15:18.334491 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-05 01:15:18.334495 | orchestrator |
2026-01-05 01:15:18.334499 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-05 01:15:18.334503 | orchestrator | Monday 05 January 2026  01:15:11 +0000 (0:00:01.391)       0:00:02.935 ********
2026-01-05 01:15:18.334506 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:18.334510 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:18.334514 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:18.334517 | orchestrator | ok: [testbed-manager]
2026-01-05 01:15:18.334521 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:18.334525 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:18.334529 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:18.334533 | orchestrator |
2026-01-05 01:15:18.334537 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-05 01:15:18.334559 | orchestrator |
2026-01-05 01:15:18.334563 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-05 01:15:18.334566 | orchestrator | Monday 05 January 2026  01:15:17 +0000 (0:00:05.674)       0:00:08.610 ********
2026-01-05 01:15:18.334570 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:15:18.334574 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:15:18.334600 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:15:18.334604 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:15:18.334608 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:15:18.334611 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:15:18.334615 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:15:18.334619 | orchestrator |
2026-01-05 01:15:18.334622 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:15:18.334638 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:15:18.334643 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:15:18.334647 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:15:18.334650 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:15:18.334654 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:15:18.334658 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:15:18.334662 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:15:18.334665 | orchestrator |
2026-01-05 01:15:18.334669 | orchestrator |
2026-01-05 01:15:18.334673 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:15:18.334676 | orchestrator | Monday 05 January 2026  01:15:17 +0000 (0:00:00.572)       0:00:09.182 ********
2026-01-05 01:15:18.334680 | orchestrator | ===============================================================================
2026-01-05 01:15:18.334684 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.67s
2026-01-05 01:15:18.334688 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s
2026-01-05 01:15:18.334692 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s
2026-01-05 01:15:18.334695 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2026-01-05 01:15:20.815405 | orchestrator | 2026-01-05 01:15:20 | INFO  | Task e6b2725f-c54f-44d0-aef1-988131868a2e (ceph) was prepared for execution.
2026-01-05 01:15:20.816416 | orchestrator | 2026-01-05 01:15:20 | INFO  | It takes a moment until task e6b2725f-c54f-44d0-aef1-988131868a2e (ceph) has been started and output is visible here.
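Editor's note: the `osism.commons.facts` play above creates a custom facts directory and (conditionally) copies fact files into it. As a hedged sketch only, assuming the conventional Ansible layout where static JSON `*.fact` files under `/etc/ansible/facts.d` are exposed as `ansible_local.<name>`, reading such files back could look like this (the directory and fact content below are illustrative, not taken from this job):

```python
import json
import pathlib
import tempfile

def load_custom_facts(facts_dir):
    """Collect static JSON '*.fact' files from an Ansible custom facts
    directory (conventionally /etc/ansible/facts.d on managed hosts).
    Executable fact scripts are out of scope for this sketch."""
    facts = {}
    for path in sorted(pathlib.Path(facts_dir).glob("*.fact")):
        facts[path.stem] = json.loads(path.read_text())
    return facts

# Demonstration against a throwaway directory standing in for
# /etc/ansible/facts.d (hypothetical content, illustration only).
with tempfile.TemporaryDirectory() as tmp:
    pathlib.Path(tmp, "testbed.fact").write_text('{"role": "manager"}')
    print(load_custom_facts(tmp))
```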
2026-01-05 01:15:39.761002 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 01:15:39.761112 | orchestrator | 2.16.14
2026-01-05 01:15:39.761128 | orchestrator |
2026-01-05 01:15:39.761139 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-01-05 01:15:39.761153 | orchestrator |
2026-01-05 01:15:39.761169 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-05 01:15:39.761184 | orchestrator | Monday 05 January 2026  01:15:26 +0000 (0:00:00.861)       0:00:00.861 ********
2026-01-05 01:15:39.761201 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:15:39.761243 | orchestrator |
2026-01-05 01:15:39.761258 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-05 01:15:39.761268 | orchestrator | Monday 05 January 2026  01:15:27 +0000 (0:00:01.407)       0:00:02.269 ********
2026-01-05 01:15:39.761277 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:39.761286 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:39.761295 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:39.761304 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:39.761312 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:39.761321 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:39.761329 | orchestrator |
2026-01-05 01:15:39.761338 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-05 01:15:39.761347 | orchestrator | Monday 05 January 2026  01:15:28 +0000 (0:00:01.298)       0:00:03.568 ********
2026-01-05 01:15:39.761356 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:39.761364 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:39.761373 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:39.761381 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:39.761390 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:39.761398 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:39.761407 | orchestrator |
2026-01-05 01:15:39.761416 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-05 01:15:39.761424 | orchestrator | Monday 05 January 2026  01:15:29 +0000 (0:00:00.826)       0:00:04.394 ********
2026-01-05 01:15:39.761433 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:39.761441 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:39.761450 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:39.761458 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:39.761467 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:39.761475 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:39.761484 | orchestrator |
2026-01-05 01:15:39.761493 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-05 01:15:39.761502 | orchestrator | Monday 05 January 2026  01:15:30 +0000 (0:00:01.022)       0:00:05.416 ********
2026-01-05 01:15:39.761510 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:39.761522 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:39.761532 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:39.761542 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:39.761572 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:39.761583 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:39.761594 | orchestrator |
2026-01-05 01:15:39.761604 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-05 01:15:39.761615 | orchestrator | Monday 05 January 2026  01:15:31 +0000 (0:00:00.878)       0:00:06.294 ********
2026-01-05 01:15:39.761625 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:39.761635 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:39.761660 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:39.761672 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:39.761688 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:39.761710 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:39.761726 | orchestrator |
2026-01-05 01:15:39.761741 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-05 01:15:39.761755 | orchestrator | Monday 05 January 2026  01:15:32 +0000 (0:00:00.632)       0:00:06.927 ********
2026-01-05 01:15:39.761769 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:39.761783 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:39.761799 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:39.761813 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:39.761828 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:39.761844 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:39.761859 | orchestrator |
2026-01-05 01:15:39.761875 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-05 01:15:39.761889 | orchestrator | Monday 05 January 2026  01:15:33 +0000 (0:00:00.892)       0:00:07.819 ********
2026-01-05 01:15:39.761898 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:15:39.761908 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:15:39.761926 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:15:39.761935 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:15:39.761944 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:15:39.761952 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:15:39.761961 | orchestrator |
2026-01-05 01:15:39.761970 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-05 01:15:39.761990 | orchestrator | Monday 05 January 2026  01:15:33 +0000 (0:00:00.679)       0:00:08.499 ********
2026-01-05 01:15:39.761999 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:39.762008 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:39.762066 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:39.762076 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:39.762085 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:39.762093 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:39.762102 | orchestrator |
2026-01-05 01:15:39.762111 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-05 01:15:39.762120 | orchestrator | Monday 05 January 2026  01:15:34 +0000 (0:00:00.827)       0:00:09.326 ********
2026-01-05 01:15:39.762129 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-05 01:15:39.762138 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-05 01:15:39.762147 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-05 01:15:39.762155 | orchestrator |
2026-01-05 01:15:39.762164 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-05 01:15:39.762173 | orchestrator | Monday 05 January 2026  01:15:35 +0000 (0:00:00.653)       0:00:09.980 ********
2026-01-05 01:15:39.762181 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:15:39.762190 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:15:39.762199 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:15:39.762228 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:15:39.762237 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:15:39.762245 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:15:39.762254 | orchestrator |
2026-01-05 01:15:39.762263 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-05 01:15:39.762272 | orchestrator | Monday 05 January 2026  01:15:36 +0000 (0:00:01.025)       0:00:11.005 ********
2026-01-05 01:15:39.762281 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0) 2026-01-05 01:15:39.762289 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:15:39.762298 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:15:39.762307 | orchestrator | 2026-01-05 01:15:39.762315 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-05 01:15:39.762324 | orchestrator | Monday 05 January 2026 01:15:38 +0000 (0:00:02.069) 0:00:13.075 ******** 2026-01-05 01:15:39.762332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-05 01:15:39.762342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-05 01:15:39.762351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-05 01:15:39.762359 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:39.762368 | orchestrator | 2026-01-05 01:15:39.762377 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-05 01:15:39.762386 | orchestrator | Monday 05 January 2026 01:15:38 +0000 (0:00:00.439) 0:00:13.515 ******** 2026-01-05 01:15:39.762397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-05 01:15:39.762409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-05 01:15:39.762424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-05 01:15:39.762433 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:39.762442 | orchestrator | 2026-01-05 01:15:39.762450 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-05 01:15:39.762459 | orchestrator | Monday 05 January 2026 01:15:39 +0000 (0:00:00.668) 0:00:14.183 ******** 2026-01-05 01:15:39.762483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:39.762495 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:39.762504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:39.762513 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:39.762522 | orchestrator | 2026-01-05 01:15:39.762531 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-01-05 01:15:39.762539 | orchestrator | Monday 05 January 2026 01:15:39 +0000 (0:00:00.190) 0:00:14.374 ******** 2026-01-05 01:15:39.762580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-05 01:15:36.874994', 'end': '2026-01-05 01:15:36.923789', 'delta': '0:00:00.048795', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-05 01:15:49.641659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-05 01:15:37.512462', 'end': '2026-01-05 01:15:37.562504', 'delta': '0:00:00.050042', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-05 01:15:49.641755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-05 01:15:38.064686', 'end': '2026-01-05 01:15:38.119441', 'delta': 
'0:00:00.054755', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-05 01:15:49.641790 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.641802 | orchestrator | 2026-01-05 01:15:49.641812 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-05 01:15:49.641823 | orchestrator | Monday 05 January 2026 01:15:39 +0000 (0:00:00.200) 0:00:14.574 ******** 2026-01-05 01:15:49.641832 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:15:49.641843 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:15:49.641852 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:15:49.641861 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:15:49.641870 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:15:49.641881 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:15:49.641887 | orchestrator | 2026-01-05 01:15:49.641893 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-05 01:15:49.641899 | orchestrator | Monday 05 January 2026 01:15:40 +0000 (0:00:01.213) 0:00:15.787 ******** 2026-01-05 01:15:49.641918 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:15:49.641924 | orchestrator | 2026-01-05 01:15:49.641929 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-05 01:15:49.641935 | orchestrator | Monday 05 January 2026 01:15:41 +0000 (0:00:00.657) 0:00:16.445 ******** 2026-01-05 01:15:49.641941 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.641949 | 
orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.641958 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.641973 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.641984 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.641992 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:49.641999 | orchestrator | 2026-01-05 01:15:49.642007 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-05 01:15:49.642092 | orchestrator | Monday 05 January 2026 01:15:42 +0000 (0:00:00.622) 0:00:17.067 ******** 2026-01-05 01:15:49.642104 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642112 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.642120 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.642129 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.642138 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.642147 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:49.642156 | orchestrator | 2026-01-05 01:15:49.642166 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-05 01:15:49.642174 | orchestrator | Monday 05 January 2026 01:15:43 +0000 (0:00:01.028) 0:00:18.096 ******** 2026-01-05 01:15:49.642181 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642188 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.642194 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.642200 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.642207 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.642213 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:49.642219 | orchestrator | 2026-01-05 01:15:49.642226 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-05 01:15:49.642232 | orchestrator | Monday 05 January 2026 01:15:43 
+0000 (0:00:00.650) 0:00:18.746 ******** 2026-01-05 01:15:49.642238 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642244 | orchestrator | 2026-01-05 01:15:49.642250 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-05 01:15:49.642257 | orchestrator | Monday 05 January 2026 01:15:44 +0000 (0:00:00.125) 0:00:18.871 ******** 2026-01-05 01:15:49.642263 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642278 | orchestrator | 2026-01-05 01:15:49.642285 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-05 01:15:49.642291 | orchestrator | Monday 05 January 2026 01:15:44 +0000 (0:00:00.244) 0:00:19.116 ******** 2026-01-05 01:15:49.642298 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642304 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.642310 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.642317 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.642324 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.642330 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:49.642336 | orchestrator | 2026-01-05 01:15:49.642359 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-05 01:15:49.642366 | orchestrator | Monday 05 January 2026 01:15:45 +0000 (0:00:00.816) 0:00:19.932 ******** 2026-01-05 01:15:49.642372 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642379 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.642385 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.642391 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.642398 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.642404 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:49.642411 | orchestrator | 2026-01-05 01:15:49.642417 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-01-05 01:15:49.642424 | orchestrator | Monday 05 January 2026 01:15:45 +0000 (0:00:00.643) 0:00:20.576 ******** 2026-01-05 01:15:49.642430 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642436 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.642442 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.642448 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.642457 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.642466 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:49.642475 | orchestrator | 2026-01-05 01:15:49.642489 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-05 01:15:49.642500 | orchestrator | Monday 05 January 2026 01:15:46 +0000 (0:00:00.814) 0:00:21.391 ******** 2026-01-05 01:15:49.642508 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642516 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.642524 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.642533 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.642563 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.642572 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:49.642580 | orchestrator | 2026-01-05 01:15:49.642588 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-05 01:15:49.642597 | orchestrator | Monday 05 January 2026 01:15:47 +0000 (0:00:00.581) 0:00:21.973 ******** 2026-01-05 01:15:49.642605 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642615 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.642624 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.642632 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.642641 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.642650 | orchestrator 
| skipping: [testbed-node-2] 2026-01-05 01:15:49.642655 | orchestrator | 2026-01-05 01:15:49.642661 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-05 01:15:49.642666 | orchestrator | Monday 05 January 2026 01:15:47 +0000 (0:00:00.828) 0:00:22.801 ******** 2026-01-05 01:15:49.642672 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642677 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.642682 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.642687 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.642693 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.642698 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:49.642703 | orchestrator | 2026-01-05 01:15:49.642709 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-05 01:15:49.642722 | orchestrator | Monday 05 January 2026 01:15:48 +0000 (0:00:00.805) 0:00:23.607 ******** 2026-01-05 01:15:49.642734 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.642739 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:49.642744 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:49.642750 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:49.642755 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:49.642760 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:15:49.642765 | orchestrator | 2026-01-05 01:15:49.642771 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-05 01:15:49.642776 | orchestrator | Monday 05 January 2026 01:15:49 +0000 (0:00:00.640) 0:00:24.247 ******** 2026-01-05 01:15:49.642784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--9b63b326--8bb9--546b--aabb--a628fef076ec-osd--block--9b63b326--8bb9--546b--aabb--a628fef076ec', 'dm-uuid-LVM-aax8Lv27NCCQjPi1qio1vJTPmq4Z2c3GNKnBnMAGF0tJqvI6sSs3evnn2KUDak0C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.642792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c-osd--block--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c', 'dm-uuid-LVM-YlhFCSURoBNX3OX3YiXj0O0Zc8T7SdkQ6cGUFgLwbPE7lg60PLZeAg8gCNHqABZF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.642807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.717750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.717846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.717857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.717866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.717914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.717924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.717931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.718008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:49.718069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9b63b326--8bb9--546b--aabb--a628fef076ec-osd--block--9b63b326--8bb9--546b--aabb--a628fef076ec'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3CXOmo-fjPi-sB7K-8cPd-5gdl-1Eim-RcVjLf', 'scsi-0QEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d', 'scsi-SQEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:49.718096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c-osd--block--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NUQqtM-WhHN-H4hZ-NI7i-o5vB-47E9-oLkBA6', 'scsi-0QEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b', 'scsi-SQEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:49.718105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cc420972--ce44--5a44--a5a6--a707e77471c5-osd--block--cc420972--ce44--5a44--a5a6--a707e77471c5', 'dm-uuid-LVM-5y6etqdOybvwL8SKpqd9lO6ea8AihF6ogUllt99DApL2987EmbvRNCTuGEj3rZSj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2026-01-05 01:15:49.718115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41', 'scsi-SQEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:49.718130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e-osd--block--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e', 'dm-uuid-LVM-XtuO3kY5fz70u4PVT7kRjLID7YPzDdlaHpmcICK37hKMr5v7VPxurWIVpPi7MnTe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.852004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:49.852102 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.852141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.852167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.852177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.852186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.852195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.852204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.852213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:49.852223 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:15:49.852258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part1', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part14', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part15', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part16', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:49.852275 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cc420972--ce44--5a44--a5a6--a707e77471c5-osd--block--cc420972--ce44--5a44--a5a6--a707e77471c5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NROBWj-Y4k8-CNdZ-qqy0-C4t6-f7oi-swyzyZ', 'scsi-0QEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70', 'scsi-SQEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:49.852287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e-osd--block--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-humsa4-KTgB-CbrW-Qmdc-6Kz2-zs64-ZjfIZb', 'scsi-0QEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522', 'scsi-SQEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:49.852302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3', 'scsi-SQEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.026910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.027020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13a82a55--1430--5b0a--a1a4--baa9d6ca4414-osd--block--13a82a55--1430--5b0a--a1a4--baa9d6ca4414', 'dm-uuid-LVM-N6IGAFcTK4f0RIoIL68bIa5oeOtjeq5VPt3zysJ6uusfwuUnDTnTWFIlh4KrifZL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--124df3d1--788c--586c--b42c--9b6f84a90775-osd--block--124df3d1--788c--586c--b42c--9b6f84a90775', 
'dm-uuid-LVM-esS8nIABj2XOT7SZaVlhCHBSO01PfHEXG2YstjcMIJDQ3Sk02xtXf1d3vB4hoV1I'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.027135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part16', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.027148 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:15:50.027162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--13a82a55--1430--5b0a--a1a4--baa9d6ca4414-osd--block--13a82a55--1430--5b0a--a1a4--baa9d6ca4414'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WJQI6q-2zyD-20jq-Oozc-DwSo-tfBz-Pdgnig', 'scsi-0QEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b', 'scsi-SQEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.027187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--124df3d1--788c--586c--b42c--9b6f84a90775-osd--block--124df3d1--788c--586c--b42c--9b6f84a90775'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sOK4iT-Id2D-YriF-inFH-U9JU-3Vhu-dR9bqV', 'scsi-0QEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0', 'scsi-SQEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.477289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2', 'scsi-SQEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.477408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.477420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part1', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part14', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part15', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part16', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.477522 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.477618 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:50.477626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.477644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-05 01:15:50.752296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.752461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:15:50.752469 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:50.752475 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:50.752480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:15:50.752514 | orchestrator | skipping: [testbed-node-2] => 
(item=loop6)
2026-01-05 01:15:50.752525 | orchestrator | skipping: [testbed-node-2] => (item=loop7)
2026-01-05 01:15:50.998414 | orchestrator | skipping: [testbed-node-2] => (item=sda)
2026-01-05 01:15:50.998657 | orchestrator | skipping: [testbed-node-2] => (item=sr0)
2026-01-05 01:15:50.998683 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:15:50.998694 | orchestrator |
2026-01-05 01:15:50.998702 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-05 01:15:50.998710 | orchestrator | Monday 05 January 2026 01:15:50 +0000 (0:00:01.310) 0:00:25.557 ********
2026-01-05 01:15:50.998718 | orchestrator | skipping: [testbed-node-3] => (item=dm-0, skip_reason: Conditional result was False, false_condition: osd_auto_discovery | default(False) | bool)
2026-01-05 01:15:50.998746 | orchestrator | skipping: [testbed-node-3] => (item=dm-1)
2026-01-05 01:15:50.998765 | orchestrator | skipping: [testbed-node-3] => (item=loop0)
2026-01-05 01:15:50.998777 | orchestrator | skipping: [testbed-node-3] => (item=loop1)
2026-01-05 01:15:50.998788 | orchestrator | skipping: [testbed-node-3] => (item=loop2)
2026-01-05 01:15:50.998808 | orchestrator | skipping: [testbed-node-3] => (item=loop3)
2026-01-05 01:15:50.998818 | orchestrator | skipping: [testbed-node-3] => (item=loop4)
2026-01-05 01:15:50.998829 | orchestrator | skipping: [testbed-node-3] => (item=loop5)
2026-01-05 01:15:50.998846 | orchestrator | skipping: [testbed-node-3] => (item=loop6)
2026-01-05 01:15:51.045470 | orchestrator | skipping: [testbed-node-3] => (item=loop7)
2026-01-05 01:15:51.045593 | orchestrator | skipping: [testbed-node-3] => (item=sda)
2026-01-05 01:15:51.045629 | orchestrator | skipping: [testbed-node-3] => (item=sdb)
2026-01-05 01:15:51.045654 | orchestrator | skipping: [testbed-node-4] => (item=dm-0, skip_reason: Conditional result was False, false_condition: osd_auto_discovery | default(False) | bool)
2026-01-05 01:15:51.045660 | orchestrator | skipping: [testbed-node-3] => (item=sdc)
2026-01-05 01:15:51.045668 | orchestrator | skipping: [testbed-node-4] => (item=dm-1)
2026-01-05 01:15:51.045673 | orchestrator | skipping: [testbed-node-3] => (item=sdd)
2026-01-05 01:15:51.045677 | orchestrator | skipping: [testbed-node-4] => (item=loop0)
2026-01-05 01:15:51.045688 | orchestrator | skipping: [testbed-node-3] => (item=sr0)
2026-01-05 01:15:51.153231 | orchestrator | skipping: [testbed-node-4] => (item=loop1)
2026-01-05 01:15:51.153331 | orchestrator | skipping: [testbed-node-4] => (item=loop2)
2026-01-05 01:15:51.153337 | orchestrator | skipping: [testbed-node-4] => (item=loop3)
2026-01-05 01:15:51.153341 | orchestrator | skipping: [testbed-node-4] => (item=loop4)
2026-01-05 01:15:51.153346 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:15:51.153352 | orchestrator | skipping: [testbed-node-4] => (item=loop5)
2026-01-05 01:15:51.153356 | orchestrator | skipping: [testbed-node-4] => (item=loop6)
2026-01-05 01:15:51.153383 | orchestrator | skipping: [testbed-node-4] => (item=loop7)
2026-01-05 01:15:51.153394 | orchestrator | skipping: [testbed-node-4] => (item=sda)
2026-01-05 01:15:51.153401 | orchestrator | skipping: [testbed-node-4] => (item=sdb)
2026-01-05 01:15:51.153412 | orchestrator | skipping: [testbed-node-4] => (item=sdc)
2026-01-05 01:15:51.343413 | orchestrator | skipping: [testbed-node-5] => (item=dm-0, skip_reason: Conditional result was False, false_condition: osd_auto_discovery | default(False) | bool)
2026-01-05 01:15:51.343530 | orchestrator | skipping: [testbed-node-4] => (item=sdd)
2026-01-05 01:15:51.343598 | orchestrator | skipping: [testbed-node-5] => (item=dm-1)
2026-01-05 01:15:51.343612 | orchestrator | skipping: [testbed-node-4] => (item=sr0)
2026-01-05 01:15:51.343637 | orchestrator | skipping: [testbed-node-5] => (item=loop0)
2026-01-05 01:15:51.343691 | orchestrator | skipping: [testbed-node-5] => (item=loop1)
2026-01-05 01:15:51.343705 | orchestrator | skipping: [testbed-node-5] => (item=loop2)
2026-01-05 01:15:51.343716 | orchestrator | skipping: [testbed-node-5] => (item=loop3)
2026-01-05 01:15:51.343728 | orchestrator | skipping: [testbed-node-5] => (item=loop4)
2026-01-05 01:15:51.343768 | orchestrator | skipping: [testbed-node-5] => (item=loop5)
2026-01-05 01:15:51.343789 | orchestrator | skipping: [testbed-node-5] => (item=loop6)
2026-01-05 01:15:51.343816 | orchestrator | skipping: [testbed-node-5] => (item=loop7)
2026-01-05 01:15:51.343977 | orchestrator | skipping: [testbed-node-5] => (item=sda)
2026-01-05 01:15:51.422398 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:15:51.422510 | orchestrator | skipping: [testbed-node-5] => (item=sdb)
2026-01-05 01:15:51.422628 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--124df3d1--788c--586c--b42c--9b6f84a90775-osd--block--124df3d1--788c--586c--b42c--9b6f84a90775'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sOK4iT-Id2D-YriF-inFH-U9JU-3Vhu-dR9bqV', 'scsi-0QEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0', 'scsi-SQEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422679 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2', 'scsi-SQEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422693 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422726 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422740 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422758 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422777 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422788 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422800 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422811 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.422831 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577729 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part1', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part14', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part15', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part16', 'scsi-SQEMU_QEMU_HARDDISK_78340717-3c88-4c84-83f5-f931035dba88-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-05 01:15:51.577848 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577862 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:15:51.577873 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577900 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577910 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577931 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577940 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577948 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577956 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577965 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.577986 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_d306408f-4477-48ac-bb84-69da99054fdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.808824 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.808941 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:15:51.808962 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:15:51.808974 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-05 01:15:51.808983 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.808990 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.809030 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 
01:15:51.809037 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.809062 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.809070 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.809076 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:15:51.809090 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766', 'scsi-SQEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766-part1', 'scsi-SQEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766-part14', 'scsi-SQEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766-part15', 'scsi-SQEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766-part16', 'scsi-SQEMU_QEMU_HARDDISK_7efae86a-8b4f-4401-bee7-529c12412766-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:15:51.809115 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:16:04.148336 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:16:04.148444 | orchestrator |
2026-01-05 01:16:04.148457 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-05 01:16:04.148468 | orchestrator | Monday 05 January 2026 01:15:51 +0000 (0:00:01.063) 0:00:26.620 ********
2026-01-05 01:16:04.148475 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:16:04.148482 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:16:04.148489 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:16:04.148495 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:16:04.148501 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:16:04.148508 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:16:04.148514 | orchestrator |
2026-01-05 01:16:04.148572 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-05 01:16:04.148580 | orchestrator | Monday 05 January 2026 01:15:53 +0000 (0:00:01.266) 0:00:27.887 ********
2026-01-05 01:16:04.148586 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:16:04.148592 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:16:04.148598 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:16:04.148604 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:16:04.148611 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:16:04.148618 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:16:04.148625 | orchestrator |
2026-01-05 01:16:04.148652 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-05 01:16:04.148660 | orchestrator | Monday 05 January 2026 01:15:53 +0000 (0:00:00.648) 0:00:28.535 ********
2026-01-05 01:16:04.148667 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:16:04.148673 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:16:04.148680 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:16:04.148687 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:16:04.148694 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:16:04.148701 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:16:04.148708 | orchestrator |
2026-01-05 01:16:04.148715 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-05 01:16:04.148721 | orchestrator | Monday 05 January 2026 01:15:54 +0000 (0:00:00.854) 0:00:29.390 ********
2026-01-05 01:16:04.148728 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:16:04.148735 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:16:04.148742 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:16:04.148749 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:16:04.148756 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:16:04.148762 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:16:04.148769 | orchestrator |
2026-01-05 01:16:04.148776 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-05 01:16:04.148782 | orchestrator | Monday 05 January 2026 01:15:55 +0000 (0:00:00.606) 0:00:29.997 ********
2026-01-05 01:16:04.148789 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:16:04.148796 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:16:04.148803 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:16:04.148809 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:16:04.148816 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:16:04.148823 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:16:04.148829 | orchestrator |
2026-01-05 01:16:04.148836 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-05 01:16:04.148843 | orchestrator | Monday 05 January 2026 01:15:56 +0000 (0:00:00.876) 0:00:30.873 ********
2026-01-05 01:16:04.148849 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:16:04.148856 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:16:04.148862 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:16:04.148869 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:16:04.148891 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:16:04.148900 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:16:04.148909 | orchestrator |
2026-01-05 01:16:04.148919 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-05 01:16:04.148932 | orchestrator | Monday 05 January 2026 01:15:56 +0000 (0:00:00.651) 0:00:31.525 ********
2026-01-05 01:16:04.148942 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 01:16:04.148950 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 01:16:04.148957 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 01:16:04.148964 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 01:16:04.148973 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 01:16:04.148981 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 01:16:04.148989 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 01:16:04.148997 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 01:16:04.149005 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 01:16:04.149012 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-05 01:16:04.149019 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 01:16:04.149026 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 01:16:04.149033 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-05 01:16:04.149040 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-05 01:16:04.149047 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 01:16:04.149061 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-05 01:16:04.149068 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-05 01:16:04.149075 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-05
01:16:04.149083 | orchestrator | 2026-01-05 01:16:04.149090 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-05 01:16:04.149098 | orchestrator | Monday 05 January 2026 01:15:58 +0000 (0:00:01.592) 0:00:33.118 ******** 2026-01-05 01:16:04.149105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-05 01:16:04.149114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-05 01:16:04.149121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-05 01:16:04.149128 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:04.149136 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-05 01:16:04.149143 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-05 01:16:04.149151 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-05 01:16:04.149177 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:04.149185 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-05 01:16:04.149192 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-05 01:16:04.149199 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-05 01:16:04.149207 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:04.149214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-05 01:16:04.149222 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-05 01:16:04.149229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-05 01:16:04.149236 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:04.149244 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-05 01:16:04.149251 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-05 01:16:04.149258 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-01-05 01:16:04.149266 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:04.149273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-05 01:16:04.149280 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-05 01:16:04.149287 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-05 01:16:04.149294 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:04.149300 | orchestrator | 2026-01-05 01:16:04.149307 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-05 01:16:04.149314 | orchestrator | Monday 05 January 2026 01:15:59 +0000 (0:00:00.722) 0:00:33.841 ******** 2026-01-05 01:16:04.149321 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:04.149327 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:04.149334 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:04.149342 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:16:04.149349 | orchestrator | 2026-01-05 01:16:04.149357 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-05 01:16:04.149366 | orchestrator | Monday 05 January 2026 01:16:00 +0000 (0:00:01.095) 0:00:34.936 ******** 2026-01-05 01:16:04.149372 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:04.149379 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:04.149386 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:04.149393 | orchestrator | 2026-01-05 01:16:04.149400 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-05 01:16:04.149406 | orchestrator | Monday 05 January 2026 01:16:00 +0000 (0:00:00.378) 0:00:35.315 ******** 2026-01-05 01:16:04.149413 | orchestrator 
| skipping: [testbed-node-3] 2026-01-05 01:16:04.149420 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:04.149432 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:04.149439 | orchestrator | 2026-01-05 01:16:04.149446 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-05 01:16:04.149453 | orchestrator | Monday 05 January 2026 01:16:01 +0000 (0:00:00.565) 0:00:35.880 ******** 2026-01-05 01:16:04.149460 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:04.149466 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:04.149472 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:04.149479 | orchestrator | 2026-01-05 01:16:04.149490 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-05 01:16:04.149497 | orchestrator | Monday 05 January 2026 01:16:01 +0000 (0:00:00.352) 0:00:36.233 ******** 2026-01-05 01:16:04.149504 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:04.149511 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:04.149566 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:16:04.149575 | orchestrator | 2026-01-05 01:16:04.149581 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-05 01:16:04.149587 | orchestrator | Monday 05 January 2026 01:16:01 +0000 (0:00:00.444) 0:00:36.677 ******** 2026-01-05 01:16:04.149593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:16:04.149599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:16:04.149606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:16:04.149613 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:04.149620 | orchestrator | 2026-01-05 01:16:04.149626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-05 01:16:04.149633 | 
orchestrator | Monday 05 January 2026 01:16:02 +0000 (0:00:00.402) 0:00:37.079 ******** 2026-01-05 01:16:04.149640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:16:04.149647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:16:04.149654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:16:04.149661 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:04.149668 | orchestrator | 2026-01-05 01:16:04.149675 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-05 01:16:04.149682 | orchestrator | Monday 05 January 2026 01:16:02 +0000 (0:00:00.643) 0:00:37.722 ******** 2026-01-05 01:16:04.149688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:16:04.149695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:16:04.149702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:16:04.149709 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:04.149715 | orchestrator | 2026-01-05 01:16:04.149722 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-05 01:16:04.149729 | orchestrator | Monday 05 January 2026 01:16:03 +0000 (0:00:00.655) 0:00:38.378 ******** 2026-01-05 01:16:04.149736 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:04.149743 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:04.149749 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:16:04.149756 | orchestrator | 2026-01-05 01:16:04.149763 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-05 01:16:04.149777 | orchestrator | Monday 05 January 2026 01:16:04 +0000 (0:00:00.575) 0:00:38.953 ******** 2026-01-05 01:16:23.935782 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 01:16:23.935921 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-01-05 01:16:23.935941 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-05 01:16:23.935953 | orchestrator | 2026-01-05 01:16:23.935965 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-05 01:16:23.935979 | orchestrator | Monday 05 January 2026 01:16:04 +0000 (0:00:00.609) 0:00:39.563 ******** 2026-01-05 01:16:23.935990 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:16:23.936001 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:16:23.936036 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:16:23.936044 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-05 01:16:23.936051 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-05 01:16:23.936057 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-05 01:16:23.936064 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-05 01:16:23.936070 | orchestrator | 2026-01-05 01:16:23.936076 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-05 01:16:23.936082 | orchestrator | Monday 05 January 2026 01:16:05 +0000 (0:00:01.070) 0:00:40.634 ******** 2026-01-05 01:16:23.936090 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:16:23.936096 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:16:23.936102 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:16:23.936108 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-05 01:16:23.936115 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-05 01:16:23.936121 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-05 01:16:23.936127 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-05 01:16:23.936133 | orchestrator | 2026-01-05 01:16:23.936139 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 01:16:23.936145 | orchestrator | Monday 05 January 2026 01:16:07 +0000 (0:00:02.084) 0:00:42.719 ******** 2026-01-05 01:16:23.936152 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:16:23.936160 | orchestrator | 2026-01-05 01:16:23.936166 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 01:16:23.936172 | orchestrator | Monday 05 January 2026 01:16:09 +0000 (0:00:01.208) 0:00:43.927 ******** 2026-01-05 01:16:23.936191 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:16:23.936198 | orchestrator | 2026-01-05 01:16:23.936204 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 01:16:23.936210 | orchestrator | Monday 05 January 2026 01:16:10 +0000 (0:00:01.236) 0:00:45.164 ******** 2026-01-05 01:16:23.936216 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:23.936223 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:23.936229 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:23.936235 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:16:23.936241 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:16:23.936247 | 
orchestrator | ok: [testbed-node-2] 2026-01-05 01:16:23.936253 | orchestrator | 2026-01-05 01:16:23.936260 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 01:16:23.936266 | orchestrator | Monday 05 January 2026 01:16:11 +0000 (0:00:01.046) 0:00:46.211 ******** 2026-01-05 01:16:23.936272 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:23.936278 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:23.936284 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.936290 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:16:23.936297 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.936303 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.936309 | orchestrator | 2026-01-05 01:16:23.936315 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 01:16:23.936321 | orchestrator | Monday 05 January 2026 01:16:12 +0000 (0:00:00.884) 0:00:47.095 ******** 2026-01-05 01:16:23.936334 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.936341 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.936347 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:23.936353 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.936360 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:23.936371 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:16:23.936380 | orchestrator | 2026-01-05 01:16:23.936391 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 01:16:23.936401 | orchestrator | Monday 05 January 2026 01:16:13 +0000 (0:00:00.761) 0:00:47.856 ******** 2026-01-05 01:16:23.936411 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:23.936421 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:23.936431 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.936441 | orchestrator | ok: [testbed-node-5] 2026-01-05 
01:16:23.936452 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.936464 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.936474 | orchestrator | 2026-01-05 01:16:23.936486 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 01:16:23.936517 | orchestrator | Monday 05 January 2026 01:16:13 +0000 (0:00:00.880) 0:00:48.737 ******** 2026-01-05 01:16:23.936525 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:23.936533 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:23.936560 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:23.936567 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:16:23.936574 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:16:23.936582 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:16:23.936589 | orchestrator | 2026-01-05 01:16:23.936597 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 01:16:23.936605 | orchestrator | Monday 05 January 2026 01:16:14 +0000 (0:00:01.022) 0:00:49.760 ******** 2026-01-05 01:16:23.936612 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:23.936619 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:23.936626 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:23.936635 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.936646 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.936656 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.936667 | orchestrator | 2026-01-05 01:16:23.936677 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 01:16:23.936686 | orchestrator | Monday 05 January 2026 01:16:15 +0000 (0:00:00.898) 0:00:50.658 ******** 2026-01-05 01:16:23.936697 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:23.936706 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:23.936716 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:23.936727 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.936738 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.936748 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.936757 | orchestrator | 2026-01-05 01:16:23.936769 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 01:16:23.936775 | orchestrator | Monday 05 January 2026 01:16:16 +0000 (0:00:00.629) 0:00:51.288 ******** 2026-01-05 01:16:23.936782 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:23.936788 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:23.936794 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:16:23.936800 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:16:23.936806 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:16:23.936812 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:16:23.936818 | orchestrator | 2026-01-05 01:16:23.936825 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 01:16:23.936831 | orchestrator | Monday 05 January 2026 01:16:17 +0000 (0:00:01.365) 0:00:52.653 ******** 2026-01-05 01:16:23.936837 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:23.936843 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:23.936849 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:16:23.936855 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:16:23.936861 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:16:23.936875 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:16:23.936881 | orchestrator | 2026-01-05 01:16:23.936887 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 01:16:23.936893 | orchestrator | Monday 05 January 2026 01:16:18 +0000 (0:00:01.045) 0:00:53.699 ******** 2026-01-05 01:16:23.936899 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:23.936906 | 
orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:23.936912 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:23.936918 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.936924 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.936930 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.936936 | orchestrator | 2026-01-05 01:16:23.936942 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 01:16:23.936949 | orchestrator | Monday 05 January 2026 01:16:19 +0000 (0:00:00.877) 0:00:54.577 ******** 2026-01-05 01:16:23.936955 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:23.936961 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:23.936972 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:23.936978 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:16:23.936984 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:16:23.936990 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:16:23.936996 | orchestrator | 2026-01-05 01:16:23.937003 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 01:16:23.937009 | orchestrator | Monday 05 January 2026 01:16:20 +0000 (0:00:00.656) 0:00:55.234 ******** 2026-01-05 01:16:23.937015 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:23.937021 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:23.937027 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:16:23.937033 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.937040 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.937046 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.937052 | orchestrator | 2026-01-05 01:16:23.937058 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 01:16:23.937064 | orchestrator | Monday 05 January 2026 01:16:21 +0000 (0:00:00.869) 0:00:56.103 ******** 
2026-01-05 01:16:23.937070 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:23.937076 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:23.937082 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:16:23.937088 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.937094 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.937100 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.937107 | orchestrator | 2026-01-05 01:16:23.937113 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 01:16:23.937119 | orchestrator | Monday 05 January 2026 01:16:21 +0000 (0:00:00.615) 0:00:56.718 ******** 2026-01-05 01:16:23.937125 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:16:23.937131 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:16:23.937137 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:16:23.937144 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.937150 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.937156 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.937162 | orchestrator | 2026-01-05 01:16:23.937168 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 01:16:23.937174 | orchestrator | Monday 05 January 2026 01:16:22 +0000 (0:00:00.910) 0:00:57.629 ******** 2026-01-05 01:16:23.937180 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:23.937186 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:16:23.937192 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:16:23.937198 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:16:23.937204 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:16:23.937210 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:16:23.937216 | orchestrator | 2026-01-05 01:16:23.937223 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 
01:16:23.937236 | orchestrator | Monday 05 January 2026 01:16:23 +0000 (0:00:00.822) 0:00:58.451 ******** 2026-01-05 01:16:23.937242 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:16:23.937254 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:17:32.209248 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:17:32.209335 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:17:32.209345 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:17:32.209352 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:17:32.209359 | orchestrator | 2026-01-05 01:17:32.209367 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 01:17:32.209376 | orchestrator | Monday 05 January 2026 01:16:24 +0000 (0:00:00.619) 0:00:59.071 ******** 2026-01-05 01:17:32.209383 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:17:32.209387 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:17:32.209391 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:17:32.209396 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:17:32.209401 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:17:32.209405 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:17:32.209452 | orchestrator | 2026-01-05 01:17:32.209457 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 01:17:32.209462 | orchestrator | Monday 05 January 2026 01:16:25 +0000 (0:00:00.870) 0:00:59.941 ******** 2026-01-05 01:17:32.209466 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:17:32.209470 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:17:32.209474 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:17:32.209478 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:17:32.209482 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:17:32.209486 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:17:32.209490 | orchestrator | 2026-01-05 01:17:32.209494 | orchestrator | TASK 
[ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 01:17:32.209498 | orchestrator | Monday 05 January 2026 01:16:25 +0000 (0:00:00.681) 0:01:00.623 ******** 2026-01-05 01:17:32.209502 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:17:32.209506 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:17:32.209510 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:17:32.209514 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:17:32.209517 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:17:32.209521 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:17:32.209525 | orchestrator | 2026-01-05 01:17:32.209529 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-05 01:17:32.209533 | orchestrator | Monday 05 January 2026 01:16:27 +0000 (0:00:01.298) 0:01:01.921 ******** 2026-01-05 01:17:32.209537 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:17:32.209541 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:17:32.209545 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:17:32.209548 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:17:32.209552 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:17:32.209556 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:17:32.209560 | orchestrator | 2026-01-05 01:17:32.209564 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-05 01:17:32.209568 | orchestrator | Monday 05 January 2026 01:16:28 +0000 (0:00:01.765) 0:01:03.687 ******** 2026-01-05 01:17:32.209571 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:17:32.209575 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:17:32.209579 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:17:32.209583 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:17:32.209587 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:17:32.209591 | orchestrator | changed: [testbed-node-2] 
2026-01-05 01:17:32.209594 | orchestrator |
2026-01-05 01:17:32.209598 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-05 01:17:32.209602 | orchestrator | Monday 05 January 2026 01:16:31 +0000 (0:00:02.324) 0:01:06.011 ********
2026-01-05 01:17:32.209620 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:17:32.209643 | orchestrator |
2026-01-05 01:17:32.209647 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-05 01:17:32.209651 | orchestrator | Monday 05 January 2026 01:16:32 +0000 (0:00:01.335) 0:01:07.346 ********
2026-01-05 01:17:32.209654 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:32.209658 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:32.209662 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:32.209666 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:32.209669 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:32.209673 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:32.209677 | orchestrator |
2026-01-05 01:17:32.209680 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-05 01:17:32.209684 | orchestrator | Monday 05 January 2026 01:16:33 +0000 (0:00:00.901) 0:01:08.248 ********
2026-01-05 01:17:32.209688 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:32.209692 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:32.209696 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:32.209699 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:32.209703 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:32.209707 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:32.209710 | orchestrator |
2026-01-05 01:17:32.209714 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-05 01:17:32.209718 | orchestrator | Monday 05 January 2026 01:16:34 +0000 (0:00:00.606) 0:01:08.855 ********
2026-01-05 01:17:32.209722 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 01:17:32.209725 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 01:17:32.209729 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 01:17:32.209734 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 01:17:32.209742 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 01:17:32.209747 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 01:17:32.209755 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 01:17:32.209763 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 01:17:32.209770 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-05 01:17:32.209791 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 01:17:32.209797 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 01:17:32.209803 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-05 01:17:32.209809 | orchestrator |
2026-01-05 01:17:32.209815 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-05 01:17:32.209821 | orchestrator | Monday 05 January 2026 01:16:35 +0000 (0:00:01.608) 0:01:10.463 ********
2026-01-05 01:17:32.209827 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:17:32.209832 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:17:32.209839 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:17:32.209844 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:17:32.209850 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:17:32.209857 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:17:32.209862 | orchestrator |
2026-01-05 01:17:32.209869 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-05 01:17:32.209875 | orchestrator | Monday 05 January 2026 01:16:36 +0000 (0:00:00.973) 0:01:11.437 ********
2026-01-05 01:17:32.209881 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:32.209886 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:32.209890 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:32.209895 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:32.209905 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:32.209909 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:32.209913 | orchestrator |
2026-01-05 01:17:32.209918 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-05 01:17:32.209922 | orchestrator | Monday 05 January 2026 01:16:37 +0000 (0:00:00.877) 0:01:12.314 ********
2026-01-05 01:17:32.209926 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:32.209931 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:32.209935 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:32.209939 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:32.209944 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:32.209948 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:32.209952 | orchestrator |
2026-01-05 01:17:32.209956 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-05 01:17:32.209961 | orchestrator | Monday 05 January 2026 01:16:38 +0000 (0:00:00.605) 0:01:12.919 ********
2026-01-05 01:17:32.209965 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:32.209969 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:32.209974 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:32.209978 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:32.209983 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:32.209987 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:32.209992 | orchestrator |
2026-01-05 01:17:32.209996 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-05 01:17:32.210000 | orchestrator | Monday 05 January 2026 01:16:38 +0000 (0:00:00.819) 0:01:13.739 ********
2026-01-05 01:17:32.210006 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:17:32.210010 | orchestrator |
2026-01-05 01:17:32.210050 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-05 01:17:32.210059 | orchestrator | Monday 05 January 2026 01:16:40 +0000 (0:00:01.265) 0:01:15.004 ********
2026-01-05 01:17:32.210063 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:17:32.210068 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:17:32.210073 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:17:32.210077 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:17:32.210081 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:17:32.210086 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:17:32.210090 | orchestrator |
2026-01-05 01:17:32.210095 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-05 01:17:32.210099 | orchestrator | Monday 05 January 2026 01:17:31 +0000 (0:00:50.990) 0:02:05.995 ********
2026-01-05 01:17:32.210103 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 01:17:32.210107 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 01:17:32.210110 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 01:17:32.210114 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:32.210118 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 01:17:32.210122 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 01:17:32.210125 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 01:17:32.210129 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:32.210133 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 01:17:32.210137 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 01:17:32.210141 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 01:17:32.210144 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:32.210148 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 01:17:32.210156 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 01:17:32.210160 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 01:17:32.210163 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:32.210167 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 01:17:32.210171 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 01:17:32.210175 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 01:17:32.210183 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.491854 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-05 01:17:57.491947 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-05 01:17:57.491956 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-05 01:17:57.491963 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.491969 | orchestrator |
2026-01-05 01:17:57.491976 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-05 01:17:57.491982 | orchestrator | Monday 05 January 2026 01:17:32 +0000 (0:00:01.022) 0:02:07.017 ********
2026-01-05 01:17:57.491988 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.491993 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.491999 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492005 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492010 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492016 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492021 | orchestrator |
2026-01-05 01:17:57.492027 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-05 01:17:57.492032 | orchestrator | Monday 05 January 2026 01:17:32 +0000 (0:00:00.671) 0:02:07.689 ********
2026-01-05 01:17:57.492038 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492043 | orchestrator |
2026-01-05 01:17:57.492048 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-05 01:17:57.492054 | orchestrator | Monday 05 January 2026 01:17:33 +0000 (0:00:00.419) 0:02:08.109 ********
2026-01-05 01:17:57.492059 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492065 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492070 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492076 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492081 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492087 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492092 | orchestrator |
2026-01-05 01:17:57.492097 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-05 01:17:57.492103 | orchestrator | Monday 05 January 2026 01:17:33 +0000 (0:00:00.688) 0:02:08.797 ********
2026-01-05 01:17:57.492108 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492114 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492119 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492124 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492130 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492135 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492140 | orchestrator |
2026-01-05 01:17:57.492146 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-05 01:17:57.492151 | orchestrator | Monday 05 January 2026 01:17:34 +0000 (0:00:00.839) 0:02:09.637 ********
2026-01-05 01:17:57.492157 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492162 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492167 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492173 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492178 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492184 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492189 | orchestrator |
2026-01-05 01:17:57.492212 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-05 01:17:57.492218 | orchestrator | Monday 05 January 2026 01:17:35 +0000 (0:00:00.674) 0:02:10.312 ********
2026-01-05 01:17:57.492236 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:17:57.492243 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:17:57.492248 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:17:57.492254 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:17:57.492259 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:17:57.492264 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:17:57.492269 | orchestrator |
2026-01-05 01:17:57.492275 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-05 01:17:57.492280 | orchestrator | Monday 05 January 2026 01:17:39 +0000 (0:00:04.195) 0:02:14.508 ********
2026-01-05 01:17:57.492286 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:17:57.492291 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:17:57.492296 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:17:57.492301 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:17:57.492307 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:17:57.492312 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:17:57.492317 | orchestrator |
2026-01-05 01:17:57.492323 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-05 01:17:57.492328 | orchestrator | Monday 05 January 2026 01:17:40 +0000 (0:00:00.625) 0:02:15.133 ********
2026-01-05 01:17:57.492336 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:17:57.492343 | orchestrator |
2026-01-05 01:17:57.492348 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-05 01:17:57.492354 | orchestrator | Monday 05 January 2026 01:17:41 +0000 (0:00:01.290) 0:02:16.423 ********
2026-01-05 01:17:57.492359 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492364 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492370 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492375 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492380 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492386 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492391 | orchestrator |
2026-01-05 01:17:57.492397 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-05 01:17:57.492402 | orchestrator | Monday 05 January 2026 01:17:42 +0000 (0:00:00.871) 0:02:17.294 ********
2026-01-05 01:17:57.492407 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492413 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492418 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492423 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492429 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492434 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492439 | orchestrator |
2026-01-05 01:17:57.492445 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-05 01:17:57.492450 | orchestrator | Monday 05 January 2026 01:17:43 +0000 (0:00:00.674) 0:02:17.968 ********
2026-01-05 01:17:57.492455 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492484 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492497 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492505 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492514 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492523 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492531 | orchestrator |
2026-01-05 01:17:57.492539 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-05 01:17:57.492548 | orchestrator | Monday 05 January 2026 01:17:44 +0000 (0:00:00.939) 0:02:18.908 ********
2026-01-05 01:17:57.492556 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492565 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492574 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492605 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492622 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492630 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492638 | orchestrator |
2026-01-05 01:17:57.492646 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-05 01:17:57.492655 | orchestrator | Monday 05 January 2026 01:17:44 +0000 (0:00:00.637) 0:02:19.546 ********
2026-01-05 01:17:57.492663 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492671 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492679 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492687 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492696 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492704 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492712 | orchestrator |
2026-01-05 01:17:57.492721 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-05 01:17:57.492729 | orchestrator | Monday 05 January 2026 01:17:45 +0000 (0:00:00.907) 0:02:20.453 ********
2026-01-05 01:17:57.492738 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492747 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492755 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492763 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492771 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492780 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492789 | orchestrator |
2026-01-05 01:17:57.492797 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-05 01:17:57.492806 | orchestrator | Monday 05 January 2026 01:17:46 +0000 (0:00:00.663) 0:02:21.117 ********
2026-01-05 01:17:57.492813 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492822 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492830 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492838 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492847 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492855 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492863 | orchestrator |
2026-01-05 01:17:57.492872 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-05 01:17:57.492880 | orchestrator | Monday 05 January 2026 01:17:47 +0000 (0:00:00.864) 0:02:21.982 ********
2026-01-05 01:17:57.492888 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:17:57.492896 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:17:57.492905 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:17:57.492913 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:17:57.492921 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:17:57.492930 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:17:57.492938 | orchestrator |
2026-01-05 01:17:57.492946 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-05 01:17:57.492959 | orchestrator | Monday 05 January 2026 01:17:47 +0000 (0:00:00.719) 0:02:22.702 ********
2026-01-05 01:17:57.492967 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:17:57.492976 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:17:57.492984 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:17:57.492992 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:17:57.493001 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:17:57.493008 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:17:57.493017 | orchestrator |
2026-01-05 01:17:57.493025 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-05 01:17:57.493033 | orchestrator | Monday 05 January 2026 01:17:49 +0000 (0:00:01.343) 0:02:24.045 ********
2026-01-05 01:17:57.493043 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:17:57.493053 | orchestrator |
2026-01-05 01:17:57.493062 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-05 01:17:57.493070 | orchestrator | Monday 05 January 2026 01:17:50 +0000 (0:00:01.276) 0:02:25.322 ********
2026-01-05 01:17:57.493085 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-05 01:17:57.493094 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-05 01:17:57.493102 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-05 01:17:57.493110 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-05 01:17:57.493119 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-05 01:17:57.493127 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-05 01:17:57.493135 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-05 01:17:57.493144 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-05 01:17:57.493152 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-05 01:17:57.493160 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-05 01:17:57.493168 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-05 01:17:57.493177 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-05 01:17:57.493186 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-05 01:17:57.493195 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-05 01:17:57.493203 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-05 01:17:57.493212 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-05 01:17:57.493222 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-05 01:17:57.493238 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-05 01:18:03.092314 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-05 01:18:03.092403 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-05 01:18:03.092419 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-05 01:18:03.092429 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-05 01:18:03.092437 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-05 01:18:03.092446 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-05 01:18:03.092455 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-05 01:18:03.092463 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-05 01:18:03.092472 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-05 01:18:03.092480 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-05 01:18:03.092489 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-05 01:18:03.092498 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-05 01:18:03.092507 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-05 01:18:03.092515 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-05 01:18:03.092523 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-05 01:18:03.092531 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-05 01:18:03.092538 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-05 01:18:03.092546 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-05 01:18:03.092554 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-05 01:18:03.092562 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-05 01:18:03.092570 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-05 01:18:03.092578 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-05 01:18:03.092586 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 01:18:03.092595 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-05 01:18:03.092604 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-05 01:18:03.092612 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-05 01:18:03.092664 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-05 01:18:03.092673 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-05 01:18:03.092682 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 01:18:03.092690 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 01:18:03.092697 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-05 01:18:03.092706 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-05 01:18:03.092715 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-05 01:18:03.092738 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 01:18:03.092747 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 01:18:03.092755 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 01:18:03.092764 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 01:18:03.092773 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 01:18:03.092782 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-05 01:18:03.092790 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 01:18:03.092798 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 01:18:03.092804 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 01:18:03.092809 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 01:18:03.092814 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 01:18:03.092819 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-05 01:18:03.092827 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 01:18:03.092834 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 01:18:03.092843 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 01:18:03.092851 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 01:18:03.092859 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 01:18:03.092867 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-05 01:18:03.092875 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 01:18:03.092884 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 01:18:03.092893 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 01:18:03.092902 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 01:18:03.092911 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 01:18:03.092920 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-05 01:18:03.092928 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 01:18:03.092956 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 01:18:03.092967 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-05 01:18:03.092976 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 01:18:03.092985 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 01:18:03.092994 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-05 01:18:03.093002 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 01:18:03.093011 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-05 01:18:03.093020 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-05 01:18:03.093037 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 01:18:03.093046 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 01:18:03.093053 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-05 01:18:03.093061 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-05 01:18:03.093068 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-05 01:18:03.093076 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-05 01:18:03.093083 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-05 01:18:03.093093 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-05 01:18:03.093102 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-05 01:18:03.093111 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-05 01:18:03.093120 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-05 01:18:03.093128 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-05 01:18:03.093137 | orchestrator |
2026-01-05 01:18:03.093147 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-05 01:18:03.093156 | orchestrator | Monday 05 January 2026 01:17:57 +0000 (0:00:06.974) 0:02:32.296 ********
2026-01-05 01:18:03.093164 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:03.093173 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:03.093179 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:03.093185 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:18:03.093192 | orchestrator |
2026-01-05 01:18:03.093197 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-05 01:18:03.093202 | orchestrator | Monday 05 January 2026 01:17:58 +0000 (0:00:01.051) 0:02:33.347 ********
2026-01-05 01:18:03.093207 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-05 01:18:03.093213 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-05 01:18:03.093223 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-05 01:18:03.093229 | orchestrator |
2026-01-05 01:18:03.093234 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-05 01:18:03.093239 | orchestrator | Monday 05 January 2026 01:17:59 +0000 (0:00:00.833) 0:02:34.181 ********
2026-01-05 01:18:03.093244 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-05 01:18:03.093249 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-05 01:18:03.093254 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-05 01:18:03.093259 | orchestrator |
2026-01-05 01:18:03.093264 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-05 01:18:03.093269 | orchestrator | Monday 05 January 2026 01:18:00 +0000 (0:00:00.895) 0:02:35.457 ********
2026-01-05 01:18:03.093275 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:18:03.093280 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:18:03.093285 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:18:03.093290 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:03.093295 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:03.093300 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:03.093306 | orchestrator |
2026-01-05 01:18:03.093311 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-05 01:18:03.093316 | orchestrator | Monday 05 January 2026 01:18:01 +0000 (0:00:00.639) 0:02:36.353 ********
2026-01-05 01:18:03.093326 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:18:03.093331 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:18:03.093336 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:18:03.093341 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:03.093346 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:03.093351 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:03.093356 | orchestrator |
2026-01-05 01:18:03.093361 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-05 01:18:03.093366 | orchestrator | Monday 05 January 2026 01:18:02 +0000 (0:00:00.639) 0:02:36.992 ********
2026-01-05 01:18:03.093371 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:18:03.093377 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:18:03.093382 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:18:03.093387 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:03.093392 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:03.093397 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:03.093402 | orchestrator |
2026-01-05 01:18:03.093412 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-05 01:18:16.751466 | orchestrator | Monday 05 January 2026 01:18:03 +0000 (0:00:00.908) 0:02:37.900 ********
2026-01-05 01:18:16.751601 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:18:16.751628 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:18:16.751647 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:18:16.751666 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:16.751684 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:16.751702 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:16.751720 | orchestrator |
2026-01-05 01:18:16.751740 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-05 01:18:16.751818 | orchestrator | Monday 05 January 2026 01:18:03 +0000 (0:00:00.662) 0:02:38.563 ********
2026-01-05 01:18:16.751830 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:18:16.751842 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:18:16.751853 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:18:16.751864 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:16.751875 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:16.751892 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:16.751916 | orchestrator |
2026-01-05 01:18:16.751946 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-05 01:18:16.751965 | orchestrator | Monday 05 January 2026 01:18:04 +0000 (0:00:00.838) 0:02:39.401 ********
2026-01-05 01:18:16.751982 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:18:16.752000 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:18:16.752020 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:18:16.752039 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:16.752058 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:16.752076 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:16.752097 | orchestrator |
2026-01-05 01:18:16.752118 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-05 01:18:16.752138 | orchestrator | Monday 05 January 2026 01:18:05 +0000 (0:00:00.625) 0:02:40.026 ********
2026-01-05 01:18:16.752155 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:18:16.752169 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:18:16.752182 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:18:16.752196 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:16.752208 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:16.752220 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:16.752233 | orchestrator |
2026-01-05 01:18:16.752246 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-05 01:18:16.752259 | orchestrator | Monday 05 January 2026 01:18:06 +0000 (0:00:00.858) 0:02:40.885 ********
2026-01-05 01:18:16.752273 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:18:16.752314 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:18:16.752329 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:18:16.752341 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:16.752354 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:16.752368 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:16.752380 | orchestrator |
2026-01-05 01:18:16.752394 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-05 01:18:16.752407 | orchestrator | Monday 05 January 2026 01:18:06 +0000 (0:00:00.595) 0:02:41.481 ********
2026-01-05 01:18:16.752440 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:16.752452 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:16.752465 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:16.752482 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:18:16.752502 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:18:16.752520 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:18:16.752538 | orchestrator |
2026-01-05 01:18:16.752565 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-05 01:18:16.752588 | orchestrator | Monday 05 January 2026 01:18:09 +0000 (0:00:03.095) 0:02:44.577 ********
2026-01-05 01:18:16.752607 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:18:16.752626 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:18:16.752643 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:18:16.752660 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:16.752677 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:16.752695 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:16.752713 |
orchestrator | 2026-01-05 01:18:16.752730 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-05 01:18:16.752780 | orchestrator | Monday 05 January 2026 01:18:10 +0000 (0:00:00.628) 0:02:45.205 ******** 2026-01-05 01:18:16.752800 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:18:16.752820 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:18:16.752839 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:18:16.752857 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:16.752876 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:16.752890 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:16.752901 | orchestrator | 2026-01-05 01:18:16.752912 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-05 01:18:16.752923 | orchestrator | Monday 05 January 2026 01:18:11 +0000 (0:00:00.929) 0:02:46.134 ******** 2026-01-05 01:18:16.752934 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:16.752945 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:16.752956 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:16.752967 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:16.752978 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:16.752988 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:16.752999 | orchestrator | 2026-01-05 01:18:16.753010 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-05 01:18:16.753022 | orchestrator | Monday 05 January 2026 01:18:12 +0000 (0:00:00.849) 0:02:46.984 ******** 2026-01-05 01:18:16.753033 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 01:18:16.753047 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2026-01-05 01:18:16.753059 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 01:18:16.753070 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:16.753105 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:16.753117 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:16.753128 | orchestrator | 2026-01-05 01:18:16.753139 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-05 01:18:16.753150 | orchestrator | Monday 05 January 2026 01:18:12 +0000 (0:00:00.640) 0:02:47.624 ******** 2026-01-05 01:18:16.753177 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-05 01:18:16.753192 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-05 01:18:16.753204 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:16.753216 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-05 01:18:16.753227 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-05 01:18:16.753238 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:16.753249 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-01-05 01:18:16.753270 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-05 01:18:16.753282 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:16.753293 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:16.753304 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:16.753315 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:16.753325 | orchestrator | 2026-01-05 01:18:16.753337 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-05 01:18:16.753348 | orchestrator | Monday 05 January 2026 01:18:13 +0000 (0:00:00.907) 0:02:48.532 ******** 2026-01-05 01:18:16.753359 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:16.753369 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:16.753380 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:16.753391 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:16.753402 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:16.753413 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:16.753423 | orchestrator | 
2026-01-05 01:18:16.753435 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-05 01:18:16.753445 | orchestrator | Monday 05 January 2026 01:18:14 +0000 (0:00:00.643) 0:02:49.175 ******** 2026-01-05 01:18:16.753456 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:16.753467 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:16.753478 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:16.753489 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:16.753501 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:16.753520 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:16.753547 | orchestrator | 2026-01-05 01:18:16.753567 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-05 01:18:16.753584 | orchestrator | Monday 05 January 2026 01:18:15 +0000 (0:00:00.858) 0:02:50.034 ******** 2026-01-05 01:18:16.753614 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:16.753632 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:16.753650 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:16.753668 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:16.753686 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:16.753705 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:16.753725 | orchestrator | 2026-01-05 01:18:16.753744 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-05 01:18:16.753787 | orchestrator | Monday 05 January 2026 01:18:15 +0000 (0:00:00.645) 0:02:50.680 ******** 2026-01-05 01:18:16.753805 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:16.753822 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:16.753839 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:16.753854 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 01:18:16.753872 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:16.753889 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:16.753906 | orchestrator | 2026-01-05 01:18:16.753923 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-05 01:18:16.753955 | orchestrator | Monday 05 January 2026 01:18:16 +0000 (0:00:00.875) 0:02:51.555 ******** 2026-01-05 01:18:34.792874 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.793068 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:34.793084 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:34.793095 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:34.793103 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:34.793111 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:34.793130 | orchestrator | 2026-01-05 01:18:34.793140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-05 01:18:34.793149 | orchestrator | Monday 05 January 2026 01:18:17 +0000 (0:00:00.684) 0:02:52.240 ******** 2026-01-05 01:18:34.793158 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:18:34.793167 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:18:34.793175 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:18:34.793183 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:34.793191 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:34.793199 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:34.793207 | orchestrator | 2026-01-05 01:18:34.793215 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-05 01:18:34.793223 | orchestrator | Monday 05 January 2026 01:18:18 +0000 (0:00:00.879) 0:02:53.120 ******** 2026-01-05 01:18:34.793231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:18:34.793239 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:18:34.793247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:18:34.793255 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.793262 | orchestrator | 2026-01-05 01:18:34.793270 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-05 01:18:34.793278 | orchestrator | Monday 05 January 2026 01:18:18 +0000 (0:00:00.435) 0:02:53.556 ******** 2026-01-05 01:18:34.793286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:18:34.793294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:18:34.793302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:18:34.793310 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.793318 | orchestrator | 2026-01-05 01:18:34.793326 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-05 01:18:34.793334 | orchestrator | Monday 05 January 2026 01:18:19 +0000 (0:00:00.461) 0:02:54.017 ******** 2026-01-05 01:18:34.793342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:18:34.793349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:18:34.793358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:18:34.793366 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.793397 | orchestrator | 2026-01-05 01:18:34.793406 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-05 01:18:34.793416 | orchestrator | Monday 05 January 2026 01:18:19 +0000 (0:00:00.431) 0:02:54.449 ******** 2026-01-05 01:18:34.793425 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:18:34.793435 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:18:34.793444 | orchestrator | ok: [testbed-node-5] 
2026-01-05 01:18:34.793454 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:34.793468 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:34.793499 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:34.793513 | orchestrator | 2026-01-05 01:18:34.793527 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-05 01:18:34.793541 | orchestrator | Monday 05 January 2026 01:18:20 +0000 (0:00:00.865) 0:02:55.314 ******** 2026-01-05 01:18:34.793556 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 01:18:34.793570 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-05 01:18:34.793582 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-05 01:18:34.793591 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-05 01:18:34.793601 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:34.793611 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-05 01:18:34.793621 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:34.793630 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-05 01:18:34.793640 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:34.793649 | orchestrator | 2026-01-05 01:18:34.793659 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-05 01:18:34.793668 | orchestrator | Monday 05 January 2026 01:18:22 +0000 (0:00:01.711) 0:02:57.025 ******** 2026-01-05 01:18:34.793681 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:18:34.793695 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:18:34.793708 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:18:34.793719 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:18:34.793732 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:18:34.793745 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:18:34.793759 | orchestrator | 2026-01-05 01:18:34.793774 | orchestrator | RUNNING 
HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 01:18:34.793788 | orchestrator | Monday 05 January 2026 01:18:24 +0000 (0:00:02.403) 0:02:59.429 ******** 2026-01-05 01:18:34.793800 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:18:34.793808 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:18:34.793816 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:18:34.793824 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:18:34.793831 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:18:34.793839 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:18:34.793847 | orchestrator | 2026-01-05 01:18:34.793855 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-05 01:18:34.793863 | orchestrator | Monday 05 January 2026 01:18:25 +0000 (0:00:01.372) 0:03:00.801 ******** 2026-01-05 01:18:34.793870 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.793878 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:34.793886 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:34.793894 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:18:34.793987 | orchestrator | 2026-01-05 01:18:34.793997 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-05 01:18:34.794005 | orchestrator | Monday 05 January 2026 01:18:26 +0000 (0:00:00.894) 0:03:01.695 ******** 2026-01-05 01:18:34.794013 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:18:34.794113 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:18:34.794130 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:18:34.794145 | orchestrator | 2026-01-05 01:18:34.794159 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-05 01:18:34.794172 | orchestrator | Monday 05 January 2026 01:18:27 +0000 
(0:00:00.620) 0:03:02.316 ******** 2026-01-05 01:18:34.794236 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:18:34.794252 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:18:34.794266 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:18:34.794279 | orchestrator | 2026-01-05 01:18:34.794293 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-05 01:18:34.794301 | orchestrator | Monday 05 January 2026 01:18:28 +0000 (0:00:01.279) 0:03:03.596 ******** 2026-01-05 01:18:34.794309 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-05 01:18:34.794317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-05 01:18:34.794325 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-05 01:18:34.794333 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:34.794340 | orchestrator | 2026-01-05 01:18:34.794348 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-05 01:18:34.794356 | orchestrator | Monday 05 January 2026 01:18:29 +0000 (0:00:00.651) 0:03:04.247 ******** 2026-01-05 01:18:34.794364 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:18:34.794372 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:18:34.794380 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:18:34.794388 | orchestrator | 2026-01-05 01:18:34.794395 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-05 01:18:34.794408 | orchestrator | Monday 05 January 2026 01:18:29 +0000 (0:00:00.382) 0:03:04.630 ******** 2026-01-05 01:18:34.794423 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:34.794445 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:34.794457 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:34.794469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-01-05 01:18:34.794482 | orchestrator | 2026-01-05 01:18:34.794494 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-05 01:18:34.794508 | orchestrator | Monday 05 January 2026 01:18:30 +0000 (0:00:01.135) 0:03:05.765 ******** 2026-01-05 01:18:34.794521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:18:34.794535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:18:34.794545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:18:34.794553 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794561 | orchestrator | 2026-01-05 01:18:34.794569 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-05 01:18:34.794577 | orchestrator | Monday 05 January 2026 01:18:31 +0000 (0:00:00.484) 0:03:06.250 ******** 2026-01-05 01:18:34.794585 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794592 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:34.794600 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:34.794608 | orchestrator | 2026-01-05 01:18:34.794623 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-05 01:18:34.794631 | orchestrator | Monday 05 January 2026 01:18:31 +0000 (0:00:00.345) 0:03:06.595 ******** 2026-01-05 01:18:34.794639 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794647 | orchestrator | 2026-01-05 01:18:34.794655 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-05 01:18:34.794663 | orchestrator | Monday 05 January 2026 01:18:32 +0000 (0:00:00.765) 0:03:07.361 ******** 2026-01-05 01:18:34.794671 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794679 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:34.794687 | 
orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:34.794694 | orchestrator | 2026-01-05 01:18:34.794702 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-05 01:18:34.794710 | orchestrator | Monday 05 January 2026 01:18:32 +0000 (0:00:00.365) 0:03:07.726 ******** 2026-01-05 01:18:34.794718 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794726 | orchestrator | 2026-01-05 01:18:34.794734 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-05 01:18:34.794750 | orchestrator | Monday 05 January 2026 01:18:33 +0000 (0:00:00.284) 0:03:08.011 ******** 2026-01-05 01:18:34.794758 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794766 | orchestrator | 2026-01-05 01:18:34.794773 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-05 01:18:34.794781 | orchestrator | Monday 05 January 2026 01:18:33 +0000 (0:00:00.286) 0:03:08.297 ******** 2026-01-05 01:18:34.794789 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794797 | orchestrator | 2026-01-05 01:18:34.794805 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-05 01:18:34.794813 | orchestrator | Monday 05 January 2026 01:18:33 +0000 (0:00:00.142) 0:03:08.439 ******** 2026-01-05 01:18:34.794821 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794829 | orchestrator | 2026-01-05 01:18:34.794836 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-05 01:18:34.794844 | orchestrator | Monday 05 January 2026 01:18:33 +0000 (0:00:00.255) 0:03:08.695 ******** 2026-01-05 01:18:34.794852 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794860 | orchestrator | 2026-01-05 01:18:34.794868 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 
2026-01-05 01:18:34.794876 | orchestrator | Monday 05 January 2026 01:18:34 +0000 (0:00:00.281) 0:03:08.977 ******** 2026-01-05 01:18:34.794884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:18:34.794891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:18:34.794923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:18:34.794932 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:34.794940 | orchestrator | 2026-01-05 01:18:34.794948 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-05 01:18:34.794956 | orchestrator | Monday 05 January 2026 01:18:34 +0000 (0:00:00.435) 0:03:09.412 ******** 2026-01-05 01:18:34.794973 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:54.128509 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:18:54.128613 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:18:54.128625 | orchestrator | 2026-01-05 01:18:54.128637 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-05 01:18:54.128648 | orchestrator | Monday 05 January 2026 01:18:35 +0000 (0:00:00.563) 0:03:09.976 ******** 2026-01-05 01:18:54.128657 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:54.128666 | orchestrator | 2026-01-05 01:18:54.128676 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-05 01:18:54.128684 | orchestrator | Monday 05 January 2026 01:18:35 +0000 (0:00:00.277) 0:03:10.253 ******** 2026-01-05 01:18:54.128691 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:18:54.128699 | orchestrator | 2026-01-05 01:18:54.128706 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-05 01:18:54.128713 | orchestrator | Monday 05 January 2026 01:18:35 +0000 (0:00:00.274) 0:03:10.527 ******** 2026-01-05 
01:18:54.128720 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:18:54.128728 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:18:54.128735 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:18:54.128743 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:18:54.128751 | orchestrator | 2026-01-05 01:18:54.128758 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-05 01:18:54.128765 | orchestrator | Monday 05 January 2026 01:18:36 +0000 (0:00:01.060) 0:03:11.587 ******** 2026-01-05 01:18:54.128773 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:18:54.128781 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:18:54.128788 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:18:54.128795 | orchestrator | 2026-01-05 01:18:54.128803 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-05 01:18:54.128810 | orchestrator | Monday 05 January 2026 01:18:37 +0000 (0:00:00.413) 0:03:12.001 ******** 2026-01-05 01:18:54.128837 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:18:54.128845 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:18:54.128852 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:18:54.128859 | orchestrator | 2026-01-05 01:18:54.128867 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-05 01:18:54.128874 | orchestrator | Monday 05 January 2026 01:18:38 +0000 (0:00:01.350) 0:03:13.352 ******** 2026-01-05 01:18:54.128881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:18:54.128889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:18:54.128896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:18:54.128903 | orchestrator | skipping: [testbed-node-3] 2026-01-05 
01:18:54.128910 | orchestrator |
2026-01-05 01:18:54.128917 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-05 01:18:54.128924 | orchestrator | Monday 05 January 2026 01:18:39 +0000 (0:00:00.932) 0:03:14.285 ********
2026-01-05 01:18:54.128931 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:18:54.128939 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:18:54.128946 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:18:54.128953 | orchestrator |
2026-01-05 01:18:54.128973 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-05 01:18:54.128980 | orchestrator | Monday 05 January 2026 01:18:39 +0000 (0:00:00.394) 0:03:14.679 ********
2026-01-05 01:18:54.128988 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:54.128995 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:54.129002 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:54.129010 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:18:54.129017 | orchestrator |
2026-01-05 01:18:54.129024 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-05 01:18:54.129031 | orchestrator | Monday 05 January 2026 01:18:41 +0000 (0:00:01.232) 0:03:15.911 ********
2026-01-05 01:18:54.129038 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:18:54.129045 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:18:54.129052 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:18:54.129107 | orchestrator |
2026-01-05 01:18:54.129119 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-05 01:18:54.129127 | orchestrator | Monday 05 January 2026 01:18:41 +0000 (0:00:00.337) 0:03:16.249 ********
2026-01-05 01:18:54.129134 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:18:54.129142 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:18:54.129149 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:18:54.129156 | orchestrator |
2026-01-05 01:18:54.129163 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-05 01:18:54.129170 | orchestrator | Monday 05 January 2026 01:18:43 +0000 (0:00:01.648) 0:03:17.897 ********
2026-01-05 01:18:54.129178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 01:18:54.129185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 01:18:54.129192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 01:18:54.129200 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:18:54.129207 | orchestrator |
2026-01-05 01:18:54.129214 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-05 01:18:54.129221 | orchestrator | Monday 05 January 2026 01:18:43 +0000 (0:00:00.670) 0:03:18.568 ********
2026-01-05 01:18:54.129229 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:18:54.129236 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:18:54.129243 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:18:54.129250 | orchestrator |
2026-01-05 01:18:54.129257 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-01-05 01:18:54.129265 | orchestrator | Monday 05 January 2026 01:18:44 +0000 (0:00:00.347) 0:03:18.915 ********
2026-01-05 01:18:54.129272 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:18:54.129286 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:18:54.129293 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:18:54.129301 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:54.129308 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:54.129315 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:54.129323 | orchestrator |
2026-01-05 01:18:54.129345 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-05 01:18:54.129352 | orchestrator | Monday 05 January 2026 01:18:44 +0000 (0:00:00.875) 0:03:19.790 ********
2026-01-05 01:18:54.129360 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:18:54.129367 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:18:54.129375 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:18:54.129382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:18:54.129390 | orchestrator |
2026-01-05 01:18:54.129397 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-05 01:18:54.129404 | orchestrator | Monday 05 January 2026 01:18:45 +0000 (0:00:00.908) 0:03:20.699 ********
2026-01-05 01:18:54.129412 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:18:54.129419 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:18:54.129426 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:18:54.129434 | orchestrator |
2026-01-05 01:18:54.129441 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-05 01:18:54.129448 | orchestrator | Monday 05 January 2026 01:18:46 +0000 (0:00:00.576) 0:03:21.276 ********
2026-01-05 01:18:54.129456 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:18:54.129463 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:18:54.129470 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:18:54.129478 | orchestrator |
2026-01-05 01:18:54.129485 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-05 01:18:54.129492 | orchestrator | Monday 05 January 2026 01:18:47 +0000 (0:00:01.347) 0:03:22.623 ********
2026-01-05 01:18:54.129500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 01:18:54.129507 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 01:18:54.129514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 01:18:54.129521 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:54.129529 | orchestrator |
2026-01-05 01:18:54.129536 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-05 01:18:54.129543 | orchestrator | Monday 05 January 2026 01:18:48 +0000 (0:00:00.636) 0:03:23.260 ********
2026-01-05 01:18:54.129551 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:18:54.129558 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:18:54.129565 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:18:54.129573 | orchestrator |
2026-01-05 01:18:54.129580 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-01-05 01:18:54.129587 | orchestrator |
2026-01-05 01:18:54.129594 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 01:18:54.129602 | orchestrator | Monday 05 January 2026 01:18:49 +0000 (0:00:00.590) 0:03:23.850 ********
2026-01-05 01:18:54.129610 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:18:54.129620 | orchestrator |
2026-01-05 01:18:54.129628 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 01:18:54.129635 | orchestrator | Monday 05 January 2026 01:18:49 +0000 (0:00:00.812) 0:03:24.663 ********
2026-01-05 01:18:54.129648 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:18:54.129656 | orchestrator |
2026-01-05 01:18:54.129663 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 01:18:54.129671 | orchestrator | Monday 05 January 2026 01:18:50 +0000 (0:00:00.765) 0:03:25.429 ********
2026-01-05 01:18:54.129684 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:18:54.129691 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:18:54.129699 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:18:54.129706 | orchestrator |
2026-01-05 01:18:54.129713 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 01:18:54.129720 | orchestrator | Monday 05 January 2026 01:18:51 +0000 (0:00:00.810) 0:03:26.239 ********
2026-01-05 01:18:54.129728 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:54.129735 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:54.129742 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:54.129750 | orchestrator |
2026-01-05 01:18:54.129757 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 01:18:54.129765 | orchestrator | Monday 05 January 2026 01:18:51 +0000 (0:00:00.329) 0:03:26.568 ********
2026-01-05 01:18:54.129772 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:54.129780 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:54.129787 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:54.129794 | orchestrator |
2026-01-05 01:18:54.129801 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 01:18:54.129809 | orchestrator | Monday 05 January 2026 01:18:52 +0000 (0:00:00.325) 0:03:26.894 ********
2026-01-05 01:18:54.129816 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:54.129823 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:54.129831 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:54.129838 | orchestrator |
2026-01-05 01:18:54.129845 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 01:18:54.129852 | orchestrator | Monday 05 January 2026 01:18:52 +0000 (0:00:00.572) 0:03:27.467 ********
2026-01-05 01:18:54.129860 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:18:54.129867 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:18:54.129874 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:18:54.129882 | orchestrator |
2026-01-05 01:18:54.129889 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 01:18:54.129896 | orchestrator | Monday 05 January 2026 01:18:53 +0000 (0:00:00.819) 0:03:28.287 ********
2026-01-05 01:18:54.129904 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:54.129911 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:54.129918 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:18:54.129926 | orchestrator |
2026-01-05 01:18:54.129933 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 01:18:54.129940 | orchestrator | Monday 05 January 2026 01:18:53 +0000 (0:00:00.308) 0:03:28.596 ********
2026-01-05 01:18:54.129948 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:18:54.129955 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:18:54.129967 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:19:17.093987 | orchestrator |
2026-01-05 01:19:17.094103 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 01:19:17.094113 | orchestrator | Monday 05 January 2026 01:18:54 +0000 (0:00:00.343) 0:03:28.939 ********
2026-01-05 01:19:17.094117 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094124 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094128 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094132 | orchestrator |
2026-01-05 01:19:17.094136 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 01:19:17.094141 | orchestrator | Monday 05 January 2026 01:18:55 +0000 (0:00:01.015) 0:03:29.955 ********
2026-01-05 01:19:17.094145 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094149 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094153 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094156 | orchestrator |
2026-01-05 01:19:17.094160 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 01:19:17.094164 | orchestrator | Monday 05 January 2026 01:18:55 +0000 (0:00:00.776) 0:03:30.731 ********
2026-01-05 01:19:17.094168 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:19:17.094173 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:19:17.094193 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:19:17.094197 | orchestrator |
2026-01-05 01:19:17.094201 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 01:19:17.094205 | orchestrator | Monday 05 January 2026 01:18:56 +0000 (0:00:00.329) 0:03:31.060 ********
2026-01-05 01:19:17.094209 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094212 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094216 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094220 | orchestrator |
2026-01-05 01:19:17.094223 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 01:19:17.094227 | orchestrator | Monday 05 January 2026 01:18:56 +0000 (0:00:00.331) 0:03:31.392 ********
2026-01-05 01:19:17.094231 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:19:17.094235 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:19:17.094239 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:19:17.094242 | orchestrator |
2026-01-05 01:19:17.094246 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 01:19:17.094289 | orchestrator | Monday 05 January 2026 01:18:57 +0000 (0:00:00.624) 0:03:32.017 ********
2026-01-05 01:19:17.094296 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:19:17.094302 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:19:17.094306 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:19:17.094310 | orchestrator |
2026-01-05 01:19:17.094314 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 01:19:17.094318 | orchestrator | Monday 05 January 2026 01:18:57 +0000 (0:00:00.346) 0:03:32.363 ********
2026-01-05 01:19:17.094322 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:19:17.094325 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:19:17.094329 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:19:17.094333 | orchestrator |
2026-01-05 01:19:17.094337 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 01:19:17.094341 | orchestrator | Monday 05 January 2026 01:18:57 +0000 (0:00:00.341) 0:03:32.704 ********
2026-01-05 01:19:17.094354 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:19:17.094360 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:19:17.094366 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:19:17.094372 | orchestrator |
2026-01-05 01:19:17.094379 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 01:19:17.094385 | orchestrator | Monday 05 January 2026 01:18:58 +0000 (0:00:00.355) 0:03:33.060 ********
2026-01-05 01:19:17.094391 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:19:17.094397 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:19:17.094402 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:19:17.094409 | orchestrator |
2026-01-05 01:19:17.094415 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 01:19:17.094421 | orchestrator | Monday 05 January 2026 01:18:58 +0000 (0:00:00.586) 0:03:33.646 ********
2026-01-05 01:19:17.094428 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094434 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094439 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094443 | orchestrator |
2026-01-05 01:19:17.094447 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 01:19:17.094451 | orchestrator | Monday 05 January 2026 01:18:59 +0000 (0:00:00.375) 0:03:34.021 ********
2026-01-05 01:19:17.094454 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094458 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094462 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094465 | orchestrator |
2026-01-05 01:19:17.094469 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 01:19:17.094473 | orchestrator | Monday 05 January 2026 01:18:59 +0000 (0:00:00.386) 0:03:34.408 ********
2026-01-05 01:19:17.094477 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094480 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094484 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094493 | orchestrator |
2026-01-05 01:19:17.094497 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-01-05 01:19:17.094500 | orchestrator | Monday 05 January 2026 01:19:00 +0000 (0:00:00.814) 0:03:35.223 ********
2026-01-05 01:19:17.094504 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094508 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094511 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094515 | orchestrator |
2026-01-05 01:19:17.094519 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-01-05 01:19:17.094523 | orchestrator | Monday 05 January 2026 01:19:00 +0000 (0:00:00.374) 0:03:35.598 ********
2026-01-05 01:19:17.094527 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:19:17.094531 | orchestrator |
2026-01-05 01:19:17.094535 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-01-05 01:19:17.094539 | orchestrator | Monday 05 January 2026 01:19:01 +0000 (0:00:00.633) 0:03:36.231 ********
2026-01-05 01:19:17.094542 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:19:17.094546 | orchestrator |
2026-01-05 01:19:17.094550 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-01-05 01:19:17.094567 | orchestrator | Monday 05 January 2026 01:19:01 +0000 (0:00:00.187) 0:03:36.419 ********
2026-01-05 01:19:17.094571 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-05 01:19:17.094575 | orchestrator |
2026-01-05 01:19:17.094579 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-01-05 01:19:17.094582 | orchestrator | Monday 05 January 2026 01:19:02 +0000 (0:00:01.066) 0:03:37.486 ********
2026-01-05 01:19:17.094586 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094590 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094594 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094597 | orchestrator |
2026-01-05 01:19:17.094601 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-01-05 01:19:17.094605 | orchestrator | Monday 05 January 2026 01:19:03 +0000 (0:00:00.611) 0:03:38.097 ********
2026-01-05 01:19:17.094608 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094612 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094616 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094627 | orchestrator |
2026-01-05 01:19:17.094630 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-01-05 01:19:17.094634 | orchestrator | Monday 05 January 2026 01:19:03 +0000 (0:00:00.413) 0:03:38.510 ********
2026-01-05 01:19:17.094638 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:19:17.094642 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:19:17.094646 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:19:17.094650 | orchestrator |
2026-01-05 01:19:17.094653 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-01-05 01:19:17.094657 | orchestrator | Monday 05 January 2026 01:19:04 +0000 (0:00:01.234) 0:03:39.745 ********
2026-01-05 01:19:17.094661 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:19:17.094665 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:19:17.094668 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:19:17.094672 | orchestrator |
2026-01-05 01:19:17.094676 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-01-05 01:19:17.094680 | orchestrator | Monday 05 January 2026 01:19:05 +0000 (0:00:00.793) 0:03:40.538 ********
2026-01-05 01:19:17.094683 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:19:17.094687 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:19:17.094691 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:19:17.094695 | orchestrator |
2026-01-05 01:19:17.094698 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-01-05 01:19:17.094702 | orchestrator | Monday 05 January 2026 01:19:06 +0000 (0:00:01.018) 0:03:41.557 ********
2026-01-05 01:19:17.094706 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094710 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094713 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094721 | orchestrator |
2026-01-05 01:19:17.094724 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-01-05 01:19:17.094728 | orchestrator | Monday 05 January 2026 01:19:07 +0000 (0:00:00.709) 0:03:42.266 ********
2026-01-05 01:19:17.094732 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:19:17.094736 | orchestrator |
2026-01-05 01:19:17.094739 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-01-05 01:19:17.094743 | orchestrator | Monday 05 January 2026 01:19:08 +0000 (0:00:01.334) 0:03:43.600 ********
2026-01-05 01:19:17.094747 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094751 | orchestrator |
2026-01-05 01:19:17.094758 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-01-05 01:19:17.094762 | orchestrator | Monday 05 January 2026 01:19:09 +0000 (0:00:00.751) 0:03:44.352 ********
2026-01-05 01:19:17.094766 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:19:17.094770 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 01:19:17.094774 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 01:19:17.094778 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-05 01:19:17.094781 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-01-05 01:19:17.094786 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-05 01:19:17.094789 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-05 01:19:17.094793 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-01-05 01:19:17.094797 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-05 01:19:17.094801 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-01-05 01:19:17.094804 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-01-05 01:19:17.094808 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-01-05 01:19:17.094812 | orchestrator |
2026-01-05 01:19:17.094816 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-01-05 01:19:17.094820 | orchestrator | Monday 05 January 2026 01:19:13 +0000 (0:00:03.605) 0:03:47.958 ********
2026-01-05 01:19:17.094823 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:19:17.094827 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:19:17.094831 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:19:17.094834 | orchestrator |
2026-01-05 01:19:17.094838 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-01-05 01:19:17.094842 | orchestrator | Monday 05 January 2026 01:19:14 +0000 (0:00:01.498) 0:03:49.457 ********
2026-01-05 01:19:17.094846 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094849 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094853 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094857 | orchestrator |
2026-01-05 01:19:17.094861 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-01-05 01:19:17.094864 | orchestrator | Monday 05 January 2026 01:19:14 +0000 (0:00:00.345) 0:03:49.802 ********
2026-01-05 01:19:17.094870 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:19:17.094877 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:19:17.094882 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:19:17.094888 | orchestrator |
2026-01-05 01:19:17.094894 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-01-05 01:19:17.094899 | orchestrator | Monday 05 January 2026 01:19:15 +0000 (0:00:00.361) 0:03:50.164 ********
2026-01-05 01:19:17.094906 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:19:17.094912 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:19:17.094917 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:19:17.094924 | orchestrator |
2026-01-05 01:19:17.094935 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-01-05 01:20:20.376430 | orchestrator | Monday 05 January 2026 01:19:17 +0000 (0:00:01.740) 0:03:51.905 ********
2026-01-05 01:20:20.376548 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:20:20.376593 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:20:20.376605 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:20:20.376616 | orchestrator |
2026-01-05 01:20:20.376628 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-01-05 01:20:20.376639 | orchestrator | Monday 05 January 2026 01:19:18 +0000 (0:00:01.624) 0:03:53.530 ********
2026-01-05 01:20:20.376649 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:20.376660 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:20:20.376670 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:20:20.376680 | orchestrator |
2026-01-05 01:20:20.376691 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-01-05 01:20:20.376701 | orchestrator | Monday 05 January 2026 01:19:19 +0000 (0:00:00.321) 0:03:53.851 ********
2026-01-05 01:20:20.376713 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:20:20.376745 | orchestrator |
2026-01-05 01:20:20.376757 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-01-05 01:20:20.376767 | orchestrator | Monday 05 January 2026 01:19:19 +0000 (0:00:00.582) 0:03:54.433 ********
2026-01-05 01:20:20.376778 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:20.376788 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:20:20.376797 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:20:20.376807 | orchestrator |
2026-01-05 01:20:20.376817 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-01-05 01:20:20.376827 | orchestrator | Monday 05 January 2026 01:19:20 +0000 (0:00:00.555) 0:03:54.989 ********
2026-01-05 01:20:20.376838 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:20.376848 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:20:20.376859 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:20:20.376869 | orchestrator |
2026-01-05 01:20:20.376880 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-01-05 01:20:20.376890 | orchestrator | Monday 05 January 2026 01:19:20 +0000 (0:00:00.324) 0:03:55.314 ********
2026-01-05 01:20:20.376901 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:20:20.376914 | orchestrator |
2026-01-05 01:20:20.376924 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-01-05 01:20:20.376934 | orchestrator | Monday 05 January 2026 01:19:21 +0000 (0:00:00.550) 0:03:55.865 ********
2026-01-05 01:20:20.376945 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:20:20.376956 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:20:20.376967 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:20:20.376979 | orchestrator |
2026-01-05 01:20:20.376990 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-01-05 01:20:20.377018 | orchestrator | Monday 05 January 2026 01:19:22 +0000 (0:00:01.834) 0:03:57.699 ********
2026-01-05 01:20:20.377029 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:20:20.377036 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:20:20.377043 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:20:20.377051 | orchestrator |
2026-01-05 01:20:20.377058 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-01-05 01:20:20.377065 | orchestrator | Monday 05 January 2026 01:19:24 +0000 (0:00:01.241) 0:03:58.941 ********
2026-01-05 01:20:20.377075 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:20:20.377086 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:20:20.377097 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:20:20.377107 | orchestrator |
2026-01-05 01:20:20.377118 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-01-05 01:20:20.377129 | orchestrator | Monday 05 January 2026 01:19:25 +0000 (0:00:01.821) 0:04:00.762 ********
2026-01-05 01:20:20.377140 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:20:20.377151 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:20:20.377161 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:20:20.377194 | orchestrator |
2026-01-05 01:20:20.377231 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-01-05 01:20:20.377245 | orchestrator | Monday 05 January 2026 01:19:28 +0000 (0:00:02.101) 0:04:02.864 ********
2026-01-05 01:20:20.377257 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:20:20.377267 | orchestrator |
2026-01-05 01:20:20.377278 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-01-05 01:20:20.377288 | orchestrator | Monday 05 January 2026 01:19:28 +0000 (0:00:00.842) 0:04:03.707 ********
2026-01-05 01:20:20.377298 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-01-05 01:20:20.377309 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:20:20.377320 | orchestrator |
2026-01-05 01:20:20.377330 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-01-05 01:20:20.377340 | orchestrator | Monday 05 January 2026 01:19:51 +0000 (0:00:22.118) 0:04:25.826 ********
2026-01-05 01:20:20.377351 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:20:20.377361 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:20:20.377371 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:20:20.377381 | orchestrator |
2026-01-05 01:20:20.377391 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-01-05 01:20:20.377402 | orchestrator | Monday 05 January 2026 01:20:00 +0000 (0:00:09.343) 0:04:35.169 ********
2026-01-05 01:20:20.377412 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:20.377423 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:20:20.377433 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:20:20.377443 | orchestrator |
2026-01-05 01:20:20.377453 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-01-05 01:20:20.377464 | orchestrator | Monday 05 January 2026 01:20:00 +0000 (0:00:00.607) 0:04:35.776 ********
2026-01-05 01:20:20.377500 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eb7a35bd3580008ca209fae60040777f899faae7'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-01-05 01:20:20.377514 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eb7a35bd3580008ca209fae60040777f899faae7'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-01-05 01:20:20.377526 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eb7a35bd3580008ca209fae60040777f899faae7'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-01-05 01:20:20.377540 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eb7a35bd3580008ca209fae60040777f899faae7'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-01-05 01:20:20.377551 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eb7a35bd3580008ca209fae60040777f899faae7'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-01-05 01:20:20.377570 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eb7a35bd3580008ca209fae60040777f899faae7'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__eb7a35bd3580008ca209fae60040777f899faae7'}])
2026-01-05 01:20:20.377593 | orchestrator |
2026-01-05 01:20:20.377605 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-05 01:20:20.377615 | orchestrator | Monday 05 January 2026 01:20:16 +0000 (0:00:15.727) 0:04:51.504 ********
2026-01-05 01:20:20.377626 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:20.377636 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:20:20.377647 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:20:20.377658 | orchestrator |
2026-01-05 01:20:20.377670 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-05 01:20:20.377680 | orchestrator | Monday 05 January 2026 01:20:17 +0000 (0:00:00.357) 0:04:51.862 ********
2026-01-05 01:20:20.377691 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:20:20.377702 | orchestrator |
2026-01-05 01:20:20.377713 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-05 01:20:20.377782 | orchestrator | Monday 05 January 2026 01:20:17 +0000 (0:00:00.826) 0:04:52.688 ********
2026-01-05 01:20:20.377793 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:20:20.377804 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:20:20.377814 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:20:20.377824 | orchestrator |
2026-01-05 01:20:20.377834 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-05 01:20:20.377845 | orchestrator | Monday 05 January 2026 01:20:18 +0000 (0:00:00.345) 0:04:53.033 ********
2026-01-05 01:20:20.377855 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:20.377865 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:20:20.377875 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:20:20.377885 | orchestrator |
2026-01-05 01:20:20.377895 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-05 01:20:20.377906 | orchestrator | Monday 05 January 2026 01:20:18 +0000 (0:00:00.355) 0:04:53.389 ********
2026-01-05 01:20:20.377916 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 01:20:20.377926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 01:20:20.377937 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 01:20:20.377946 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:20.377956 | orchestrator |
2026-01-05 01:20:20.377966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-05 01:20:20.377977 | orchestrator | Monday 05 January 2026 01:20:19 +0000 (0:00:01.182) 0:04:54.572 ********
2026-01-05 01:20:20.377987 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:20:20.378056 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:20:20.378072 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:20:20.378083 | orchestrator |
2026-01-05 01:20:20.378094 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-05 01:20:20.378105 | orchestrator |
2026-01-05 01:20:20.378128 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 01:20:48.930337 | orchestrator | Monday 05 January 2026 01:20:20 +0000 (0:00:00.612) 0:04:55.184 ********
2026-01-05 01:20:48.930424 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:20:48.930432 | orchestrator |
2026-01-05 01:20:48.930437 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 01:20:48.930441 | orchestrator | Monday 05 January 2026 01:20:21 +0000 (0:00:00.758) 0:04:55.943 ********
2026-01-05 01:20:48.930446 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:20:48.930466 | orchestrator |
2026-01-05 01:20:48.930470 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 01:20:48.930474 | orchestrator | Monday 05 January 2026 01:20:21 +0000 (0:00:00.589) 0:04:56.533 ********
2026-01-05 01:20:48.930478 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:20:48.930484 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:20:48.930488 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:20:48.930492 | orchestrator |
2026-01-05 01:20:48.930495 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 01:20:48.930499 | orchestrator | Monday 05 January 2026 01:20:22 +0000 (0:00:00.749) 0:04:57.282 ********
2026-01-05 01:20:48.930504 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:48.930508 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:20:48.930512 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:20:48.930516 | orchestrator |
2026-01-05 01:20:48.930520 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 01:20:48.930524 | orchestrator | Monday 05 January 2026 01:20:23 +0000 (0:00:00.566) 0:04:57.849 ********
2026-01-05 01:20:48.930528 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:48.930531 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:20:48.930535 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:20:48.930539 | orchestrator |
2026-01-05 01:20:48.930543 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 01:20:48.930546 | orchestrator | Monday 05 January 2026 01:20:23 +0000 (0:00:00.331) 0:04:58.180 ********
2026-01-05 01:20:48.930550 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:20:48.930554 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:20:48.930558 | orchestrator | skipping:
[testbed-node-2] 2026-01-05 01:20:48.930562 | orchestrator | 2026-01-05 01:20:48.930565 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 01:20:48.930569 | orchestrator | Monday 05 January 2026 01:20:23 +0000 (0:00:00.350) 0:04:58.530 ******** 2026-01-05 01:20:48.930573 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:20:48.930577 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:20:48.930580 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:20:48.930584 | orchestrator | 2026-01-05 01:20:48.930588 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 01:20:48.930603 | orchestrator | Monday 05 January 2026 01:20:24 +0000 (0:00:00.751) 0:04:59.282 ******** 2026-01-05 01:20:48.930606 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:20:48.930610 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:20:48.930614 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:20:48.930619 | orchestrator | 2026-01-05 01:20:48.930622 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 01:20:48.930626 | orchestrator | Monday 05 January 2026 01:20:25 +0000 (0:00:00.610) 0:04:59.892 ******** 2026-01-05 01:20:48.930630 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:20:48.930634 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:20:48.930637 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:20:48.930641 | orchestrator | 2026-01-05 01:20:48.930645 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 01:20:48.930649 | orchestrator | Monday 05 January 2026 01:20:25 +0000 (0:00:00.332) 0:05:00.225 ******** 2026-01-05 01:20:48.930652 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:20:48.930656 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:20:48.930660 | orchestrator | ok: [testbed-node-2] 2026-01-05 
01:20:48.930663 | orchestrator | 2026-01-05 01:20:48.930667 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 01:20:48.930671 | orchestrator | Monday 05 January 2026 01:20:26 +0000 (0:00:00.784) 0:05:01.010 ******** 2026-01-05 01:20:48.930675 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:20:48.930678 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:20:48.930682 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:20:48.930686 | orchestrator | 2026-01-05 01:20:48.930689 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 01:20:48.930698 | orchestrator | Monday 05 January 2026 01:20:26 +0000 (0:00:00.722) 0:05:01.733 ******** 2026-01-05 01:20:48.930702 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:20:48.930705 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:20:48.930709 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:20:48.930713 | orchestrator | 2026-01-05 01:20:48.930717 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 01:20:48.930720 | orchestrator | Monday 05 January 2026 01:20:27 +0000 (0:00:00.581) 0:05:02.314 ******** 2026-01-05 01:20:48.930724 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:20:48.930728 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:20:48.930732 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:20:48.930735 | orchestrator | 2026-01-05 01:20:48.930739 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 01:20:48.930743 | orchestrator | Monday 05 January 2026 01:20:27 +0000 (0:00:00.372) 0:05:02.686 ******** 2026-01-05 01:20:48.930747 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:20:48.930750 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:20:48.930754 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:20:48.930758 | orchestrator | 
2026-01-05 01:20:48.930762 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 01:20:48.930765 | orchestrator | Monday 05 January 2026 01:20:28 +0000 (0:00:00.315) 0:05:03.002 ******** 2026-01-05 01:20:48.930769 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:20:48.930773 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:20:48.930776 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:20:48.930780 | orchestrator | 2026-01-05 01:20:48.930793 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 01:20:48.930798 | orchestrator | Monday 05 January 2026 01:20:28 +0000 (0:00:00.317) 0:05:03.320 ******** 2026-01-05 01:20:48.930801 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:20:48.930805 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:20:48.930809 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:20:48.930813 | orchestrator | 2026-01-05 01:20:48.930817 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 01:20:48.930820 | orchestrator | Monday 05 January 2026 01:20:29 +0000 (0:00:00.608) 0:05:03.929 ******** 2026-01-05 01:20:48.930824 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:20:48.930828 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:20:48.930835 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:20:48.930840 | orchestrator | 2026-01-05 01:20:48.930846 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 01:20:48.930854 | orchestrator | Monday 05 January 2026 01:20:29 +0000 (0:00:00.380) 0:05:04.309 ******** 2026-01-05 01:20:48.930863 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:20:48.930869 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:20:48.930875 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:20:48.930881 | orchestrator | 
2026-01-05 01:20:48.930887 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 01:20:48.930892 | orchestrator | Monday 05 January 2026 01:20:29 +0000 (0:00:00.335) 0:05:04.645 ******** 2026-01-05 01:20:48.930898 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:20:48.930904 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:20:48.930935 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:20:48.930941 | orchestrator | 2026-01-05 01:20:48.930947 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 01:20:48.930953 | orchestrator | Monday 05 January 2026 01:20:30 +0000 (0:00:00.350) 0:05:04.996 ******** 2026-01-05 01:20:48.930959 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:20:48.930966 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:20:48.930972 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:20:48.930977 | orchestrator | 2026-01-05 01:20:48.930984 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 01:20:48.930990 | orchestrator | Monday 05 January 2026 01:20:30 +0000 (0:00:00.646) 0:05:05.643 ******** 2026-01-05 01:20:48.931002 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:20:48.931008 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:20:48.931015 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:20:48.931022 | orchestrator | 2026-01-05 01:20:48.931027 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-05 01:20:48.931034 | orchestrator | Monday 05 January 2026 01:20:31 +0000 (0:00:00.609) 0:05:06.252 ******** 2026-01-05 01:20:48.931040 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 01:20:48.931047 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:20:48.931055 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-05 01:20:48.931061 | orchestrator | 2026-01-05 01:20:48.931072 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-05 01:20:48.931079 | orchestrator | Monday 05 January 2026 01:20:32 +0000 (0:00:00.929) 0:05:07.182 ******** 2026-01-05 01:20:48.931085 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:20:48.931092 | orchestrator | 2026-01-05 01:20:48.931098 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-05 01:20:48.931103 | orchestrator | Monday 05 January 2026 01:20:33 +0000 (0:00:00.879) 0:05:08.062 ******** 2026-01-05 01:20:48.931109 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:20:48.931116 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:20:48.931123 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:20:48.931129 | orchestrator | 2026-01-05 01:20:48.931135 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-05 01:20:48.931142 | orchestrator | Monday 05 January 2026 01:20:34 +0000 (0:00:00.769) 0:05:08.831 ******** 2026-01-05 01:20:48.931148 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:20:48.931155 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:20:48.931162 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:20:48.931168 | orchestrator | 2026-01-05 01:20:48.931174 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-05 01:20:48.931180 | orchestrator | Monday 05 January 2026 01:20:34 +0000 (0:00:00.334) 0:05:09.166 ******** 2026-01-05 01:20:48.931186 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-05 01:20:48.931191 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-05 01:20:48.931196 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-01-05 01:20:48.931200 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-05 01:20:48.931205 | orchestrator | 2026-01-05 01:20:48.931210 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-05 01:20:48.931217 | orchestrator | Monday 05 January 2026 01:20:45 +0000 (0:00:11.380) 0:05:20.547 ******** 2026-01-05 01:20:48.931223 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:20:48.931230 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:20:48.931236 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:20:48.931242 | orchestrator | 2026-01-05 01:20:48.931248 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-05 01:20:48.931254 | orchestrator | Monday 05 January 2026 01:20:46 +0000 (0:00:00.976) 0:05:21.524 ******** 2026-01-05 01:20:48.931260 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-05 01:20:48.931266 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-05 01:20:48.931273 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-05 01:20:48.931279 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-05 01:20:48.931286 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:20:48.931292 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:20:48.931298 | orchestrator | 2026-01-05 01:20:48.931305 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-05 01:20:48.931317 | orchestrator | Monday 05 January 2026 01:20:48 +0000 (0:00:02.210) 0:05:23.734 ******** 2026-01-05 01:21:57.508533 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-05 01:21:57.508682 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-05 01:21:57.508701 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-05 
01:21:57.508715 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-05 01:21:57.508727 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-05 01:21:57.508740 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-05 01:21:57.508752 | orchestrator | 2026-01-05 01:21:57.508765 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-05 01:21:57.508780 | orchestrator | Monday 05 January 2026 01:20:50 +0000 (0:00:01.315) 0:05:25.050 ******** 2026-01-05 01:21:57.508791 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:57.508803 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:57.508815 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:57.508826 | orchestrator | 2026-01-05 01:21:57.508838 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-05 01:21:57.508849 | orchestrator | Monday 05 January 2026 01:20:50 +0000 (0:00:00.721) 0:05:25.771 ******** 2026-01-05 01:21:57.508861 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:57.508875 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:21:57.508887 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:21:57.508899 | orchestrator | 2026-01-05 01:21:57.508912 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-05 01:21:57.508923 | orchestrator | Monday 05 January 2026 01:20:51 +0000 (0:00:00.624) 0:05:26.396 ******** 2026-01-05 01:21:57.508934 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:57.508946 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:21:57.508958 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:21:57.508969 | orchestrator | 2026-01-05 01:21:57.508981 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-05 01:21:57.508992 | orchestrator | Monday 05 January 2026 01:20:51 +0000 (0:00:00.331) 0:05:26.727 
******** 2026-01-05 01:21:57.509004 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:21:57.509016 | orchestrator | 2026-01-05 01:21:57.509027 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-05 01:21:57.509038 | orchestrator | Monday 05 January 2026 01:20:52 +0000 (0:00:00.599) 0:05:27.327 ******** 2026-01-05 01:21:57.509051 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:57.509064 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:21:57.509077 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:21:57.509090 | orchestrator | 2026-01-05 01:21:57.509103 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-05 01:21:57.509117 | orchestrator | Monday 05 January 2026 01:20:53 +0000 (0:00:00.602) 0:05:27.929 ******** 2026-01-05 01:21:57.509129 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:57.509142 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:21:57.509154 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:21:57.509186 | orchestrator | 2026-01-05 01:21:57.509200 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-05 01:21:57.509212 | orchestrator | Monday 05 January 2026 01:20:53 +0000 (0:00:00.350) 0:05:28.279 ******** 2026-01-05 01:21:57.509226 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:21:57.509241 | orchestrator | 2026-01-05 01:21:57.509254 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-05 01:21:57.509266 | orchestrator | Monday 05 January 2026 01:20:54 +0000 (0:00:00.582) 0:05:28.861 ******** 2026-01-05 01:21:57.509278 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:21:57.509290 | orchestrator | changed: 
[testbed-node-1] 2026-01-05 01:21:57.509303 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:21:57.509315 | orchestrator | 2026-01-05 01:21:57.509385 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-05 01:21:57.509402 | orchestrator | Monday 05 January 2026 01:20:55 +0000 (0:00:01.601) 0:05:30.462 ******** 2026-01-05 01:21:57.509414 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:21:57.509425 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:21:57.509435 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:21:57.509447 | orchestrator | 2026-01-05 01:21:57.509460 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-05 01:21:57.509472 | orchestrator | Monday 05 January 2026 01:20:56 +0000 (0:00:01.231) 0:05:31.694 ******** 2026-01-05 01:21:57.509484 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:21:57.509497 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:21:57.509509 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:21:57.509521 | orchestrator | 2026-01-05 01:21:57.509533 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-05 01:21:57.509545 | orchestrator | Monday 05 January 2026 01:20:58 +0000 (0:00:01.808) 0:05:33.502 ******** 2026-01-05 01:21:57.509557 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:21:57.509568 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:21:57.509579 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:21:57.509590 | orchestrator | 2026-01-05 01:21:57.509602 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-05 01:21:57.509613 | orchestrator | Monday 05 January 2026 01:21:00 +0000 (0:00:02.093) 0:05:35.595 ******** 2026-01-05 01:21:57.509624 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:57.509636 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 01:21:57.509646 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-05 01:21:57.509657 | orchestrator | 2026-01-05 01:21:57.509668 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-05 01:21:57.509680 | orchestrator | Monday 05 January 2026 01:21:01 +0000 (0:00:00.759) 0:05:36.355 ******** 2026-01-05 01:21:57.509693 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-05 01:21:57.509706 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-05 01:21:57.509744 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-05 01:21:57.509753 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-05 01:21:57.509761 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-01-05 01:21:57.509768 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-01-05 01:21:57.509775 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:21:57.509783 | orchestrator | 2026-01-05 01:21:57.509790 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-05 01:21:57.509797 | orchestrator | Monday 05 January 2026 01:21:38 +0000 (0:00:36.621) 0:06:12.976 ******** 2026-01-05 01:21:57.509804 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:21:57.509811 | orchestrator | 2026-01-05 01:21:57.509818 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-05 01:21:57.509825 | orchestrator | Monday 05 January 2026 01:21:39 +0000 (0:00:01.410) 0:06:14.387 ******** 2026-01-05 01:21:57.509833 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:57.509840 | orchestrator | 2026-01-05 01:21:57.509847 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-05 01:21:57.509854 | orchestrator | Monday 05 January 2026 01:21:39 +0000 (0:00:00.335) 0:06:14.722 ******** 2026-01-05 01:21:57.509861 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:57.509869 | orchestrator | 2026-01-05 01:21:57.509876 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-05 01:21:57.509894 | orchestrator | Monday 05 January 2026 01:21:40 +0000 (0:00:00.171) 0:06:14.894 ******** 2026-01-05 01:21:57.509901 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-05 01:21:57.509908 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-05 01:21:57.509915 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-05 01:21:57.509923 | orchestrator | 2026-01-05 01:21:57.509930 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-01-05 01:21:57.509937 | orchestrator | Monday 05 January 2026 01:21:46 +0000 (0:00:06.460) 0:06:21.354 ******** 2026-01-05 01:21:57.509944 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-05 01:21:57.509951 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-05 01:21:57.509958 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-05 01:21:57.509973 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-05 01:21:57.509981 | orchestrator | 2026-01-05 01:21:57.509988 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 01:21:57.509996 | orchestrator | Monday 05 January 2026 01:21:51 +0000 (0:00:05.288) 0:06:26.643 ******** 2026-01-05 01:21:57.510003 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:21:57.510010 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:21:57.510071 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:21:57.510079 | orchestrator | 2026-01-05 01:21:57.510086 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-05 01:21:57.510094 | orchestrator | Monday 05 January 2026 01:21:52 +0000 (0:00:00.728) 0:06:27.371 ******** 2026-01-05 01:21:57.510101 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:21:57.510108 | orchestrator | 2026-01-05 01:21:57.510116 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-05 01:21:57.510123 | orchestrator | Monday 05 January 2026 01:21:53 +0000 (0:00:00.807) 0:06:28.179 ******** 2026-01-05 01:21:57.510130 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:57.510137 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:57.510144 | orchestrator | ok: 
[testbed-node-2] 2026-01-05 01:21:57.510151 | orchestrator | 2026-01-05 01:21:57.510159 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-05 01:21:57.510166 | orchestrator | Monday 05 January 2026 01:21:53 +0000 (0:00:00.349) 0:06:28.528 ******** 2026-01-05 01:21:57.510173 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:21:57.510181 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:21:57.510188 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:21:57.510195 | orchestrator | 2026-01-05 01:21:57.510202 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-05 01:21:57.510209 | orchestrator | Monday 05 January 2026 01:21:54 +0000 (0:00:01.286) 0:06:29.815 ******** 2026-01-05 01:21:57.510216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-05 01:21:57.510224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-05 01:21:57.510231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-05 01:21:57.510238 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:21:57.510247 | orchestrator | 2026-01-05 01:21:57.510259 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-05 01:21:57.510269 | orchestrator | Monday 05 January 2026 01:21:55 +0000 (0:00:00.895) 0:06:30.710 ******** 2026-01-05 01:21:57.510288 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:21:57.510300 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:21:57.510312 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:21:57.510392 | orchestrator | 2026-01-05 01:21:57.510403 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-05 01:21:57.510411 | orchestrator | 2026-01-05 01:21:57.510418 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 
01:21:57.510434 | orchestrator | Monday 05 January 2026 01:21:56 +0000 (0:00:00.888) 0:06:31.599 ******** 2026-01-05 01:21:57.510442 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:21:57.510452 | orchestrator | 2026-01-05 01:21:57.510468 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 01:22:17.186918 | orchestrator | Monday 05 January 2026 01:21:57 +0000 (0:00:00.717) 0:06:32.316 ******** 2026-01-05 01:22:17.187042 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:22:17.187057 | orchestrator | 2026-01-05 01:22:17.187065 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 01:22:17.187073 | orchestrator | Monday 05 January 2026 01:21:58 +0000 (0:00:00.776) 0:06:33.093 ******** 2026-01-05 01:22:17.187077 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:17.187083 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187087 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187091 | orchestrator | 2026-01-05 01:22:17.187096 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 01:22:17.187100 | orchestrator | Monday 05 January 2026 01:21:58 +0000 (0:00:00.349) 0:06:33.443 ******** 2026-01-05 01:22:17.187104 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187109 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187113 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187117 | orchestrator | 2026-01-05 01:22:17.187121 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 01:22:17.187124 | orchestrator | Monday 05 January 2026 01:21:59 +0000 (0:00:00.765) 0:06:34.209 ******** 
2026-01-05 01:22:17.187128 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187132 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187136 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187139 | orchestrator | 2026-01-05 01:22:17.187143 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 01:22:17.187147 | orchestrator | Monday 05 January 2026 01:22:00 +0000 (0:00:01.073) 0:06:35.283 ******** 2026-01-05 01:22:17.187151 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187154 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187158 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187162 | orchestrator | 2026-01-05 01:22:17.187166 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 01:22:17.187170 | orchestrator | Monday 05 January 2026 01:22:01 +0000 (0:00:00.801) 0:06:36.085 ******** 2026-01-05 01:22:17.187174 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:17.187178 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187182 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187185 | orchestrator | 2026-01-05 01:22:17.187189 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 01:22:17.187193 | orchestrator | Monday 05 January 2026 01:22:01 +0000 (0:00:00.340) 0:06:36.425 ******** 2026-01-05 01:22:17.187197 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:17.187200 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187204 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187208 | orchestrator | 2026-01-05 01:22:17.187226 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 01:22:17.187230 | orchestrator | Monday 05 January 2026 01:22:01 +0000 (0:00:00.357) 0:06:36.782 ******** 2026-01-05 01:22:17.187233 | 
orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:17.187237 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187241 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187245 | orchestrator | 2026-01-05 01:22:17.187249 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 01:22:17.187252 | orchestrator | Monday 05 January 2026 01:22:02 +0000 (0:00:00.628) 0:06:37.411 ******** 2026-01-05 01:22:17.187284 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187289 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187292 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187296 | orchestrator | 2026-01-05 01:22:17.187300 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 01:22:17.187304 | orchestrator | Monday 05 January 2026 01:22:03 +0000 (0:00:00.792) 0:06:38.204 ******** 2026-01-05 01:22:17.187307 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187311 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187315 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187318 | orchestrator | 2026-01-05 01:22:17.187322 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 01:22:17.187326 | orchestrator | Monday 05 January 2026 01:22:04 +0000 (0:00:00.808) 0:06:39.012 ******** 2026-01-05 01:22:17.187331 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:17.187335 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187338 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187342 | orchestrator | 2026-01-05 01:22:17.187346 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 01:22:17.187350 | orchestrator | Monday 05 January 2026 01:22:04 +0000 (0:00:00.344) 0:06:39.357 ******** 2026-01-05 01:22:17.187354 | orchestrator | skipping: 
[testbed-node-3] 2026-01-05 01:22:17.187357 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187361 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187365 | orchestrator | 2026-01-05 01:22:17.187369 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 01:22:17.187372 | orchestrator | Monday 05 January 2026 01:22:05 +0000 (0:00:00.603) 0:06:39.961 ******** 2026-01-05 01:22:17.187376 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187380 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187384 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187387 | orchestrator | 2026-01-05 01:22:17.187391 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 01:22:17.187395 | orchestrator | Monday 05 January 2026 01:22:05 +0000 (0:00:00.391) 0:06:40.352 ******** 2026-01-05 01:22:17.187398 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187402 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187406 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187410 | orchestrator | 2026-01-05 01:22:17.187413 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 01:22:17.187418 | orchestrator | Monday 05 January 2026 01:22:05 +0000 (0:00:00.365) 0:06:40.717 ******** 2026-01-05 01:22:17.187424 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187430 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187479 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187486 | orchestrator | 2026-01-05 01:22:17.187492 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 01:22:17.187516 | orchestrator | Monday 05 January 2026 01:22:06 +0000 (0:00:00.352) 0:06:41.069 ******** 2026-01-05 01:22:17.187523 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:17.187530 | 
orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187536 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187542 | orchestrator | 2026-01-05 01:22:17.187549 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 01:22:17.187555 | orchestrator | Monday 05 January 2026 01:22:06 +0000 (0:00:00.620) 0:06:41.690 ******** 2026-01-05 01:22:17.187561 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:17.187568 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187574 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187580 | orchestrator | 2026-01-05 01:22:17.187587 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 01:22:17.187593 | orchestrator | Monday 05 January 2026 01:22:07 +0000 (0:00:00.357) 0:06:42.048 ******** 2026-01-05 01:22:17.187599 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:17.187606 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187616 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187621 | orchestrator | 2026-01-05 01:22:17.187625 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 01:22:17.187630 | orchestrator | Monday 05 January 2026 01:22:07 +0000 (0:00:00.344) 0:06:42.392 ******** 2026-01-05 01:22:17.187634 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187638 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187643 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187649 | orchestrator | 2026-01-05 01:22:17.187659 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 01:22:17.187667 | orchestrator | Monday 05 January 2026 01:22:08 +0000 (0:00:00.658) 0:06:43.051 ******** 2026-01-05 01:22:17.187673 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187679 | orchestrator | ok: 
[testbed-node-4] 2026-01-05 01:22:17.187685 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187691 | orchestrator | 2026-01-05 01:22:17.187697 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-05 01:22:17.187704 | orchestrator | Monday 05 January 2026 01:22:08 +0000 (0:00:00.579) 0:06:43.630 ******** 2026-01-05 01:22:17.187709 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187715 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187721 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187727 | orchestrator | 2026-01-05 01:22:17.187735 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-05 01:22:17.187742 | orchestrator | Monday 05 January 2026 01:22:09 +0000 (0:00:00.352) 0:06:43.983 ******** 2026-01-05 01:22:17.187749 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:22:17.187756 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:22:17.187768 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:22:17.187775 | orchestrator | 2026-01-05 01:22:17.187781 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-05 01:22:17.187787 | orchestrator | Monday 05 January 2026 01:22:10 +0000 (0:00:00.931) 0:06:44.915 ******** 2026-01-05 01:22:17.187794 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:22:17.187800 | orchestrator | 2026-01-05 01:22:17.187806 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-05 01:22:17.187813 | orchestrator | Monday 05 January 2026 01:22:10 +0000 (0:00:00.833) 0:06:45.749 ******** 2026-01-05 01:22:17.187819 | orchestrator | skipping: 
[testbed-node-3] 2026-01-05 01:22:17.187826 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187832 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187838 | orchestrator | 2026-01-05 01:22:17.187845 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-05 01:22:17.187851 | orchestrator | Monday 05 January 2026 01:22:11 +0000 (0:00:00.335) 0:06:46.084 ******** 2026-01-05 01:22:17.187858 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:22:17.187865 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:22:17.187871 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:22:17.187878 | orchestrator | 2026-01-05 01:22:17.187884 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-05 01:22:17.187888 | orchestrator | Monday 05 January 2026 01:22:11 +0000 (0:00:00.327) 0:06:46.411 ******** 2026-01-05 01:22:17.187892 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187897 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187904 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187910 | orchestrator | 2026-01-05 01:22:17.187916 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-05 01:22:17.187922 | orchestrator | Monday 05 January 2026 01:22:12 +0000 (0:00:01.003) 0:06:47.415 ******** 2026-01-05 01:22:17.187928 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:22:17.187934 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:22:17.187946 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:22:17.187953 | orchestrator | 2026-01-05 01:22:17.187959 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-05 01:22:17.187965 | orchestrator | Monday 05 January 2026 01:22:12 +0000 (0:00:00.365) 0:06:47.780 ******** 2026-01-05 01:22:17.187971 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-05 01:22:17.187978 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-05 01:22:17.187984 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-05 01:22:17.187988 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-05 01:22:17.187992 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-05 01:22:17.187996 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-05 01:22:17.188010 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-05 01:23:20.035852 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-05 01:23:20.035952 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-05 01:23:20.035966 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-05 01:23:20.035976 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-05 01:23:20.035985 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-05 01:23:20.035994 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-05 01:23:20.036003 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-05 01:23:20.036012 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-05 01:23:20.036022 | orchestrator | 2026-01-05 01:23:20.036032 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-01-05 01:23:20.036041 | orchestrator | Monday 05 January 2026 01:22:17 +0000 (0:00:04.205) 0:06:51.985 ******** 2026-01-05 01:23:20.036050 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:20.036061 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:23:20.036069 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:23:20.036078 | orchestrator | 2026-01-05 01:23:20.036088 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-05 01:23:20.036097 | orchestrator | Monday 05 January 2026 01:22:17 +0000 (0:00:00.582) 0:06:52.568 ******** 2026-01-05 01:23:20.036106 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:23:20.036116 | orchestrator | 2026-01-05 01:23:20.036125 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-05 01:23:20.036134 | orchestrator | Monday 05 January 2026 01:22:18 +0000 (0:00:00.535) 0:06:53.103 ******** 2026-01-05 01:23:20.036143 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-05 01:23:20.036152 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-05 01:23:20.036161 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-05 01:23:20.036170 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-01-05 01:23:20.036180 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-01-05 01:23:20.036189 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-05 01:23:20.036198 | orchestrator | 2026-01-05 01:23:20.036222 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-05 01:23:20.036233 | orchestrator | Monday 05 January 2026 01:22:19 +0000 (0:00:01.128) 0:06:54.232 ******** 2026-01-05 01:23:20.036265 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:23:20.036274 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 01:23:20.036283 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:23:20.036292 | orchestrator | 2026-01-05 01:23:20.036301 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-05 01:23:20.036310 | orchestrator | Monday 05 January 2026 01:22:21 +0000 (0:00:02.341) 0:06:56.573 ******** 2026-01-05 01:23:20.036319 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 01:23:20.036328 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 01:23:20.036337 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:23:20.036346 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 01:23:20.036355 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-05 01:23:20.036364 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:23:20.036373 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 01:23:20.036382 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-05 01:23:20.036390 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:23:20.036399 | orchestrator | 2026-01-05 01:23:20.036408 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-05 01:23:20.036417 | orchestrator | Monday 05 January 2026 01:22:23 +0000 (0:00:01.623) 0:06:58.196 ******** 2026-01-05 01:23:20.036426 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:23:20.036434 | orchestrator | 2026-01-05 01:23:20.036443 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-01-05 01:23:20.036452 | orchestrator | Monday 05 January 2026 01:22:25 +0000 (0:00:02.314) 0:07:00.510 ******** 2026-01-05 01:23:20.036461 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:23:20.036470 | orchestrator | 2026-01-05 01:23:20.036479 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-01-05 01:23:20.036487 | orchestrator | Monday 05 January 2026 01:22:26 +0000 (0:00:00.601) 0:07:01.112 ******** 2026-01-05 01:23:20.036498 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-13a82a55-1430-5b0a-a1a4-baa9d6ca4414', 'data_vg': 'ceph-13a82a55-1430-5b0a-a1a4-baa9d6ca4414'}) 2026-01-05 01:23:20.036509 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b63b326-8bb9-546b-aabb-a628fef076ec', 'data_vg': 'ceph-9b63b326-8bb9-546b-aabb-a628fef076ec'}) 2026-01-05 01:23:20.036518 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cc420972-ce44-5a44-a5a6-a707e77471c5', 'data_vg': 'ceph-cc420972-ce44-5a44-a5a6-a707e77471c5'}) 2026-01-05 01:23:20.036543 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c', 'data_vg': 'ceph-b6ae7fca-c2f2-5e20-af6f-426bd4b4cc4c'}) 2026-01-05 01:23:20.036553 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-124df3d1-788c-586c-b42c-9b6f84a90775', 'data_vg': 'ceph-124df3d1-788c-586c-b42c-9b6f84a90775'}) 2026-01-05 01:23:20.036564 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e', 'data_vg': 'ceph-62cfaa39-e4fc-5ede-b6ae-ee7ea3f2ad3e'}) 2026-01-05 01:23:20.036573 | orchestrator | 2026-01-05 01:23:20.036582 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-05 01:23:20.036591 | orchestrator | Monday 05 January 2026 01:23:05 +0000 (0:00:39.153) 0:07:40.265 ******** 2026-01-05 01:23:20.036601 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:20.036610 | orchestrator | skipping: [testbed-node-4] 2026-01-05 
01:23:20.036619 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:23:20.036628 | orchestrator | 2026-01-05 01:23:20.036637 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-05 01:23:20.036647 | orchestrator | Monday 05 January 2026 01:23:05 +0000 (0:00:00.351) 0:07:40.617 ******** 2026-01-05 01:23:20.036656 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:23:20.036672 | orchestrator | 2026-01-05 01:23:20.036681 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-05 01:23:20.036690 | orchestrator | Monday 05 January 2026 01:23:06 +0000 (0:00:00.594) 0:07:41.211 ******** 2026-01-05 01:23:20.036700 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:23:20.036709 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:23:20.036718 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:23:20.036727 | orchestrator | 2026-01-05 01:23:20.036737 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-01-05 01:23:20.036746 | orchestrator | Monday 05 January 2026 01:23:07 +0000 (0:00:01.040) 0:07:42.251 ******** 2026-01-05 01:23:20.036755 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:23:20.036787 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:23:20.036796 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:23:20.036805 | orchestrator | 2026-01-05 01:23:20.036813 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-05 01:23:20.036822 | orchestrator | Monday 05 January 2026 01:23:10 +0000 (0:00:02.839) 0:07:45.091 ******** 2026-01-05 01:23:20.036831 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:23:20.036840 | orchestrator | 2026-01-05 01:23:20.036849 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-01-05 01:23:20.036857 | orchestrator | Monday 05 January 2026 01:23:10 +0000 (0:00:00.548) 0:07:45.640 ******** 2026-01-05 01:23:20.036871 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:23:20.036880 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:23:20.036889 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:23:20.036898 | orchestrator | 2026-01-05 01:23:20.036907 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-05 01:23:20.036915 | orchestrator | Monday 05 January 2026 01:23:12 +0000 (0:00:01.580) 0:07:47.221 ******** 2026-01-05 01:23:20.036924 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:23:20.036933 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:23:20.036942 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:23:20.036950 | orchestrator | 2026-01-05 01:23:20.036959 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-05 01:23:20.036968 | orchestrator | Monday 05 January 2026 01:23:13 +0000 (0:00:01.261) 0:07:48.483 ******** 2026-01-05 01:23:20.036977 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:23:20.036985 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:23:20.036994 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:23:20.037003 | orchestrator | 2026-01-05 01:23:20.037012 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-05 01:23:20.037020 | orchestrator | Monday 05 January 2026 01:23:15 +0000 (0:00:01.904) 0:07:50.388 ******** 2026-01-05 01:23:20.037029 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:20.037038 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:23:20.037047 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:23:20.037056 | orchestrator | 2026-01-05 01:23:20.037064 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-01-05 01:23:20.037073 | orchestrator | Monday 05 January 2026 01:23:15 +0000 (0:00:00.354) 0:07:50.742 ******** 2026-01-05 01:23:20.037082 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:20.037091 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:23:20.037099 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:23:20.037108 | orchestrator | 2026-01-05 01:23:20.037117 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-05 01:23:20.037125 | orchestrator | Monday 05 January 2026 01:23:16 +0000 (0:00:00.664) 0:07:51.407 ******** 2026-01-05 01:23:20.037134 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 01:23:20.037143 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-01-05 01:23:20.037152 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-01-05 01:23:20.037167 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-01-05 01:23:20.037176 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-01-05 01:23:20.037184 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-01-05 01:23:20.037193 | orchestrator | 2026-01-05 01:23:20.037202 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-05 01:23:20.037211 | orchestrator | Monday 05 January 2026 01:23:17 +0000 (0:00:01.124) 0:07:52.531 ******** 2026-01-05 01:23:20.037219 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-05 01:23:20.037228 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-05 01:23:20.037237 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-01-05 01:23:20.037246 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-01-05 01:23:20.037254 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-01-05 01:23:20.037263 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-05 01:23:20.037271 | orchestrator | 2026-01-05 01:23:20.037280 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-01-05 01:23:20.037296 | orchestrator | Monday 05 January 2026 01:23:20 +0000 (0:00:02.306) 0:07:54.837 ******** 2026-01-05 01:23:56.179458 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-05 01:23:56.179579 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-05 01:23:56.179592 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-01-05 01:23:56.179601 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-01-05 01:23:56.179609 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-01-05 01:23:56.179618 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-05 01:23:56.179657 | orchestrator | 2026-01-05 01:23:56.179668 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-05 01:23:56.179678 | orchestrator | Monday 05 January 2026 01:23:24 +0000 (0:00:04.072) 0:07:58.910 ******** 2026-01-05 01:23:56.179687 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:56.179695 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:23:56.179705 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:23:56.179724 | orchestrator | 2026-01-05 01:23:56.179732 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-05 01:23:56.179740 | orchestrator | Monday 05 January 2026 01:23:27 +0000 (0:00:03.183) 0:08:02.094 ******** 2026-01-05 01:23:56.179748 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:56.179756 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:23:56.179764 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-01-05 01:23:56.179773 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:23:56.179781 | orchestrator | 2026-01-05 01:23:56.179790 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-05 01:23:56.179798 | orchestrator | Monday 05 January 2026 01:23:39 +0000 (0:00:12.585) 0:08:14.679 ******** 2026-01-05 01:23:56.179806 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:56.179814 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:23:56.179822 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:23:56.179830 | orchestrator | 2026-01-05 01:23:56.179838 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 01:23:56.179845 | orchestrator | Monday 05 January 2026 01:23:41 +0000 (0:00:01.149) 0:08:15.829 ******** 2026-01-05 01:23:56.179854 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:56.179861 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:23:56.179869 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:23:56.179877 | orchestrator | 2026-01-05 01:23:56.179886 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-05 01:23:56.179894 | orchestrator | Monday 05 January 2026 01:23:41 +0000 (0:00:00.376) 0:08:16.206 ******** 2026-01-05 01:23:56.179919 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:23:56.179927 | orchestrator | 2026-01-05 01:23:56.179970 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-05 01:23:56.179979 | orchestrator | Monday 05 January 2026 01:23:42 +0000 (0:00:00.855) 0:08:17.061 ******** 2026-01-05 01:23:56.179987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:23:56.179995 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-01-05 01:23:56.180002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:23:56.180010 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:56.180018 | orchestrator | 2026-01-05 01:23:56.180026 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-05 01:23:56.180034 | orchestrator | Monday 05 January 2026 01:23:42 +0000 (0:00:00.419) 0:08:17.480 ******** 2026-01-05 01:23:56.180042 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:56.180049 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:23:56.180058 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:23:56.180066 | orchestrator | 2026-01-05 01:23:56.180074 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-05 01:23:56.180081 | orchestrator | Monday 05 January 2026 01:23:43 +0000 (0:00:00.343) 0:08:17.823 ******** 2026-01-05 01:23:56.180089 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:56.180096 | orchestrator | 2026-01-05 01:23:56.180104 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-05 01:23:56.180112 | orchestrator | Monday 05 January 2026 01:23:43 +0000 (0:00:00.244) 0:08:18.068 ******** 2026-01-05 01:23:56.180120 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:56.180128 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:23:56.180135 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:23:56.180143 | orchestrator | 2026-01-05 01:23:56.180151 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-05 01:23:56.180159 | orchestrator | Monday 05 January 2026 01:23:43 +0000 (0:00:00.359) 0:08:18.427 ******** 2026-01-05 01:23:56.180166 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:23:56.180174 | orchestrator | 2026-01-05 01:23:56.180182 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-05 01:23:56.180190 | orchestrator | Monday 05 January 2026 01:23:43 +0000 (0:00:00.231) 0:08:18.659 ********
2026-01-05 01:23:56.180198 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180205 | orchestrator |
2026-01-05 01:23:56.180213 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-05 01:23:56.180220 | orchestrator | Monday 05 January 2026 01:23:44 +0000 (0:00:00.869) 0:08:19.528 ********
2026-01-05 01:23:56.180228 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180235 | orchestrator |
2026-01-05 01:23:56.180243 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-05 01:23:56.180250 | orchestrator | Monday 05 January 2026 01:23:44 +0000 (0:00:00.153) 0:08:19.682 ********
2026-01-05 01:23:56.180258 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180266 | orchestrator |
2026-01-05 01:23:56.180274 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-05 01:23:56.180282 | orchestrator | Monday 05 January 2026 01:23:45 +0000 (0:00:00.244) 0:08:19.926 ********
2026-01-05 01:23:56.180289 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180297 | orchestrator |
2026-01-05 01:23:56.180305 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-05 01:23:56.180331 | orchestrator | Monday 05 January 2026 01:23:45 +0000 (0:00:00.252) 0:08:20.179 ********
2026-01-05 01:23:56.180339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-05 01:23:56.180346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-05 01:23:56.180353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-05 01:23:56.180361 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180369 | orchestrator |
2026-01-05 01:23:56.180377 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-05 01:23:56.180393 | orchestrator | Monday 05 January 2026 01:23:45 +0000 (0:00:00.493) 0:08:20.673 ********
2026-01-05 01:23:56.180401 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180409 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:23:56.180417 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:23:56.180424 | orchestrator |
2026-01-05 01:23:56.180431 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-05 01:23:56.180439 | orchestrator | Monday 05 January 2026 01:23:46 +0000 (0:00:00.332) 0:08:21.006 ********
2026-01-05 01:23:56.180447 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180456 | orchestrator |
2026-01-05 01:23:56.180464 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-05 01:23:56.180472 | orchestrator | Monday 05 January 2026 01:23:46 +0000 (0:00:00.234) 0:08:21.240 ********
2026-01-05 01:23:56.180479 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180487 | orchestrator |
2026-01-05 01:23:56.180494 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-05 01:23:56.180502 | orchestrator |
2026-01-05 01:23:56.180509 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 01:23:56.180517 | orchestrator | Monday 05 January 2026 01:23:47 +0000 (0:00:00.993) 0:08:22.233 ********
2026-01-05 01:23:56.180525 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:23:56.180535 | orchestrator |
2026-01-05 01:23:56.180543 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 01:23:56.180550 | orchestrator | Monday 05 January 2026 01:23:48 +0000 (0:00:01.259) 0:08:23.493 ********
2026-01-05 01:23:56.180558 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:23:56.180566 | orchestrator |
2026-01-05 01:23:56.180573 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 01:23:56.180581 | orchestrator | Monday 05 January 2026 01:23:50 +0000 (0:00:01.341) 0:08:24.834 ********
2026-01-05 01:23:56.180588 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180669 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:23:56.180685 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:23:56.180693 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:23:56.180701 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:23:56.180708 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:23:56.180715 | orchestrator |
2026-01-05 01:23:56.180722 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 01:23:56.180730 | orchestrator | Monday 05 January 2026 01:23:51 +0000 (0:00:01.148) 0:08:25.983 ********
2026-01-05 01:23:56.180737 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:23:56.180744 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:23:56.180751 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:23:56.180758 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:23:56.180766 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:23:56.180774 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:23:56.180781 | orchestrator |
2026-01-05 01:23:56.180789 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 01:23:56.180797 | orchestrator | Monday 05 January 2026 01:23:52 +0000 (0:00:01.044) 0:08:27.027 ********
2026-01-05 01:23:56.180804 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:23:56.180812 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:23:56.180820 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:23:56.180827 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:23:56.180835 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:23:56.180842 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:23:56.180849 | orchestrator |
2026-01-05 01:23:56.180857 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 01:23:56.180864 | orchestrator | Monday 05 January 2026 01:23:53 +0000 (0:00:01.055) 0:08:28.083 ********
2026-01-05 01:23:56.180880 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:23:56.180889 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:23:56.180896 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:23:56.180904 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:23:56.180911 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:23:56.180919 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:23:56.180926 | orchestrator |
2026-01-05 01:23:56.180934 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 01:23:56.180983 | orchestrator | Monday 05 January 2026 01:23:54 +0000 (0:00:00.784) 0:08:28.868 ********
2026-01-05 01:23:56.180992 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.180999 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:23:56.181007 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:23:56.181015 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:23:56.181023 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:23:56.181030 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:23:56.181038 | orchestrator |
2026-01-05 01:23:56.181045 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 01:23:56.181053 | orchestrator | Monday 05 January 2026 01:23:55 +0000 (0:00:01.419) 0:08:30.287 ********
2026-01-05 01:23:56.181060 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:23:56.181068 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:23:56.181075 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:23:56.181083 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:23:56.181090 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:23:56.181097 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:23:56.181104 | orchestrator |
2026-01-05 01:23:56.181112 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 01:24:28.891625 | orchestrator | Monday 05 January 2026 01:23:56 +0000 (0:00:00.699) 0:08:30.987 ********
2026-01-05 01:24:28.891744 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:28.891756 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:28.891763 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:28.891770 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:24:28.891777 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:24:28.891785 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:24:28.891799 | orchestrator |
2026-01-05 01:24:28.891816 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 01:24:28.891832 | orchestrator | Monday 05 January 2026 01:23:57 +0000 (0:00:00.896) 0:08:31.884 ********
2026-01-05 01:24:28.891839 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:28.891848 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:28.891854 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:28.891861 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:24:28.891867 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:24:28.891873 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:24:28.891879 | orchestrator |
2026-01-05 01:24:28.891886 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 01:24:28.891892 | orchestrator | Monday 05 January 2026 01:23:58 +0000 (0:00:01.117) 0:08:33.001 ********
2026-01-05 01:24:28.891898 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:28.891905 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:28.891911 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:28.891917 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:24:28.891923 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:24:28.891929 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:24:28.891935 | orchestrator |
2026-01-05 01:24:28.891941 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 01:24:28.891946 | orchestrator | Monday 05 January 2026 01:23:59 +0000 (0:00:01.323) 0:08:34.325 ********
2026-01-05 01:24:28.891953 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:28.891958 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:28.891965 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:28.891970 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:24:28.891990 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:24:28.891994 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:24:28.891997 | orchestrator |
2026-01-05 01:24:28.892001 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 01:24:28.892005 | orchestrator | Monday 05 January 2026 01:24:00 +0000 (0:00:00.615) 0:08:34.941 ********
2026-01-05 01:24:28.892009 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:28.892013 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:28.892017 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:28.892021 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:24:28.892024 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:24:28.892028 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:24:28.892032 | orchestrator |
2026-01-05 01:24:28.892049 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 01:24:28.892053 | orchestrator | Monday 05 January 2026 01:24:01 +0000 (0:00:00.952) 0:08:35.893 ********
2026-01-05 01:24:28.892056 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:28.892069 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:28.892073 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:28.892082 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:24:28.892086 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:24:28.892090 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:24:28.892094 | orchestrator |
2026-01-05 01:24:28.892147 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 01:24:28.892155 | orchestrator | Monday 05 January 2026 01:24:01 +0000 (0:00:00.658) 0:08:36.552 ********
2026-01-05 01:24:28.892161 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:28.892168 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:28.892174 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:28.892180 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:24:28.892186 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:24:28.892192 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:24:28.892198 | orchestrator |
2026-01-05 01:24:28.892204 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 01:24:28.892210 | orchestrator | Monday 05 January 2026 01:24:02 +0000 (0:00:00.856) 0:08:37.408 ********
2026-01-05 01:24:28.892217 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:28.892224 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:28.892230 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:28.892237 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:24:28.892243 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:24:28.892250 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:24:28.892254 | orchestrator |
2026-01-05 01:24:28.892259 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 01:24:28.892264 | orchestrator | Monday 05 January 2026 01:24:03 +0000 (0:00:00.683) 0:08:38.092 ********
2026-01-05 01:24:28.892268 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:28.892273 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:28.892277 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:28.892282 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:24:28.892287 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:24:28.892291 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:24:28.892295 | orchestrator |
2026-01-05 01:24:28.892300 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 01:24:28.892304 | orchestrator | Monday 05 January 2026 01:24:04 +0000 (0:00:00.874) 0:08:38.966 ********
2026-01-05 01:24:28.892309 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:28.892313 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:28.892317 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:28.892321 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:24:28.892324 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:24:28.892328 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:24:28.892332 | orchestrator |
2026-01-05 01:24:28.892336 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 01:24:28.892346 | orchestrator | Monday 05 January 2026 01:24:04 +0000 (0:00:00.696) 0:08:39.663 ********
2026-01-05 01:24:28.892350 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:28.892354 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:28.892358 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:28.892362 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:24:28.892366 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:24:28.892370 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:24:28.892373 | orchestrator |
2026-01-05 01:24:28.892392 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 01:24:28.892396 | orchestrator | Monday 05 January 2026 01:24:05 +0000 (0:00:00.938) 0:08:40.601 ********
2026-01-05 01:24:28.892400 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:28.892410 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:28.892414 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:28.892418 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:24:28.892422 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:24:28.892425 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:24:28.892435 | orchestrator |
2026-01-05 01:24:28.892439 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 01:24:28.892443 | orchestrator | Monday 05 January 2026 01:24:06 +0000 (0:00:00.646) 0:08:41.248 ********
2026-01-05 01:24:28.892447 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:28.892451 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:28.892454 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:28.892458 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:24:28.892462 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:24:28.892466 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:24:28.892470 | orchestrator |
2026-01-05 01:24:28.892473 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-05 01:24:28.892477 | orchestrator | Monday 05 January 2026 01:24:07 +0000 (0:00:01.357) 0:08:42.605 ********
2026-01-05 01:24:28.892481 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:24:28.892485 | orchestrator |
2026-01-05 01:24:28.892489 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-05 01:24:28.892493 | orchestrator | Monday 05 January 2026 01:24:11 +0000 (0:00:04.195) 0:08:46.801 ********
2026-01-05 01:24:28.892497 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:24:28.892501 | orchestrator |
2026-01-05 01:24:28.892504 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-05 01:24:28.892508 | orchestrator | Monday 05 January 2026 01:24:14 +0000 (0:00:02.672) 0:08:49.473 ********
2026-01-05 01:24:28.892512 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:24:28.892517 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:24:28.892523 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:24:28.892529 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:24:28.892536 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:24:28.892541 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:24:28.892547 | orchestrator |
2026-01-05 01:24:28.892554 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-05 01:24:28.892560 | orchestrator | Monday 05 January 2026 01:24:16 +0000 (0:00:01.637) 0:08:51.111 ********
2026-01-05 01:24:28.892566 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:24:28.892573 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:24:28.892579 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:24:28.892585 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:24:28.892598 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:24:28.892604 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:24:28.892610 | orchestrator |
2026-01-05 01:24:28.892617 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-05 01:24:28.892624 | orchestrator | Monday 05 January 2026 01:24:17 +0000 (0:00:01.296) 0:08:52.407 ********
2026-01-05 01:24:28.892630 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:24:28.892652 | orchestrator |
2026-01-05 01:24:28.892656 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-05 01:24:28.892660 | orchestrator | Monday 05 January 2026 01:24:18 +0000 (0:00:01.323) 0:08:53.731 ********
2026-01-05 01:24:28.892664 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:24:28.892667 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:24:28.892671 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:24:28.892675 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:24:28.892679 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:24:28.892682 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:24:28.892686 | orchestrator |
2026-01-05 01:24:28.892690 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-05 01:24:28.892694 | orchestrator | Monday 05 January 2026 01:24:20 +0000 (0:00:01.636) 0:08:55.368 ********
2026-01-05 01:24:28.892697 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:24:28.892701 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:24:28.892705 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:24:28.892708 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:24:28.892712 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:24:28.892716 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:24:28.892720 | orchestrator |
2026-01-05 01:24:28.892723 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-05 01:24:28.892727 | orchestrator | Monday 05 January 2026 01:24:24 +0000 (0:00:03.811) 0:08:59.180 ********
2026-01-05 01:24:28.892731 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:24:28.892736 | orchestrator |
2026-01-05 01:24:28.892739 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-05 01:24:28.892743 | orchestrator | Monday 05 January 2026 01:24:25 +0000 (0:00:01.388) 0:09:00.568 ********
2026-01-05 01:24:28.892747 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:28.892751 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:28.892754 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:28.892758 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:24:28.892762 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:24:28.892766 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:24:28.892769 | orchestrator |
2026-01-05 01:24:28.892773 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-05 01:24:28.892777 | orchestrator | Monday 05 January 2026 01:24:26 +0000 (0:00:00.657) 0:09:01.226 ********
2026-01-05 01:24:28.892781 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:24:28.892785 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:24:28.892788 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:24:28.892792 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:24:28.892796 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:24:28.892800 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:24:28.892803 | orchestrator |
2026-01-05 01:24:28.892811 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-05 01:24:57.883030 | orchestrator | Monday 05 January 2026 01:24:28 +0000 (0:00:02.464) 0:09:03.691 ********
2026-01-05 01:24:57.883142 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.883156 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.883165 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.883174 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:24:57.883183 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:24:57.883192 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:24:57.883201 | orchestrator |
2026-01-05 01:24:57.883211 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-05 01:24:57.883220 | orchestrator |
2026-01-05 01:24:57.883257 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-05 01:24:57.883271 | orchestrator | Monday 05 January 2026 01:24:30 +0000 (0:00:01.198) 0:09:04.889 ********
2026-01-05 01:24:57.883321 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:24:57.883332 | orchestrator |
2026-01-05 01:24:57.883341 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-05 01:24:57.883350 | orchestrator | Monday 05 January 2026 01:24:30 +0000 (0:00:00.557) 0:09:05.446 ********
2026-01-05 01:24:57.883359 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:24:57.883367 | orchestrator |
2026-01-05 01:24:57.883376 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-05 01:24:57.883384 | orchestrator | Monday 05 January 2026 01:24:31 +0000 (0:00:00.787) 0:09:06.234 ********
2026-01-05 01:24:57.883393 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.883403 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.883411 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.883420 | orchestrator |
2026-01-05 01:24:57.883428 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-05 01:24:57.883437 | orchestrator | Monday 05 January 2026 01:24:31 +0000 (0:00:00.337) 0:09:06.572 ********
2026-01-05 01:24:57.883445 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.883454 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.883462 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.883471 | orchestrator |
2026-01-05 01:24:57.883480 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-05 01:24:57.883488 | orchestrator | Monday 05 January 2026 01:24:32 +0000 (0:00:00.751) 0:09:07.323 ********
2026-01-05 01:24:57.883497 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.883505 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.883514 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.883522 | orchestrator |
2026-01-05 01:24:57.883544 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-05 01:24:57.883555 | orchestrator | Monday 05 January 2026 01:24:33 +0000 (0:00:00.792) 0:09:08.116 ********
2026-01-05 01:24:57.883565 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.883575 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.883585 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.883595 | orchestrator |
2026-01-05 01:24:57.883607 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-05 01:24:57.883618 | orchestrator | Monday 05 January 2026 01:24:34 +0000 (0:00:01.080) 0:09:09.197 ********
2026-01-05 01:24:57.883627 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.883637 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.883647 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.883657 | orchestrator |
2026-01-05 01:24:57.883667 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-05 01:24:57.883677 | orchestrator | Monday 05 January 2026 01:24:34 +0000 (0:00:00.372) 0:09:09.570 ********
2026-01-05 01:24:57.883687 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.883697 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.883708 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.883717 | orchestrator |
2026-01-05 01:24:57.883728 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-05 01:24:57.883738 | orchestrator | Monday 05 January 2026 01:24:35 +0000 (0:00:00.321) 0:09:09.891 ********
2026-01-05 01:24:57.883748 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.883758 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.883768 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.883778 | orchestrator |
2026-01-05 01:24:57.883788 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-05 01:24:57.883797 | orchestrator | Monday 05 January 2026 01:24:35 +0000 (0:00:00.344) 0:09:10.236 ********
2026-01-05 01:24:57.883808 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.883818 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.883828 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.883845 | orchestrator |
2026-01-05 01:24:57.883855 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-05 01:24:57.883865 | orchestrator | Monday 05 January 2026 01:24:36 +0000 (0:00:01.105) 0:09:11.342 ********
2026-01-05 01:24:57.883876 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.883885 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.883895 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.883906 | orchestrator |
2026-01-05 01:24:57.883916 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-05 01:24:57.883925 | orchestrator | Monday 05 January 2026 01:24:37 +0000 (0:00:00.770) 0:09:12.113 ********
2026-01-05 01:24:57.883934 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.883942 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.883951 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.883959 | orchestrator |
2026-01-05 01:24:57.883968 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-05 01:24:57.883979 | orchestrator | Monday 05 January 2026 01:24:37 +0000 (0:00:00.313) 0:09:12.426 ********
2026-01-05 01:24:57.883993 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.884008 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.884021 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.884035 | orchestrator |
2026-01-05 01:24:57.884050 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-05 01:24:57.884063 | orchestrator | Monday 05 January 2026 01:24:38 +0000 (0:00:00.640) 0:09:13.067 ********
2026-01-05 01:24:57.884098 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.884113 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.884128 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.884144 | orchestrator |
2026-01-05 01:24:57.884159 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-05 01:24:57.884174 | orchestrator | Monday 05 January 2026 01:24:38 +0000 (0:00:00.365) 0:09:13.432 ********
2026-01-05 01:24:57.884189 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.884203 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.884217 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.884290 | orchestrator |
2026-01-05 01:24:57.884307 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-05 01:24:57.884321 | orchestrator | Monday 05 January 2026 01:24:38 +0000 (0:00:00.358) 0:09:13.791 ********
2026-01-05 01:24:57.884334 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.884350 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.884365 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.884378 | orchestrator |
2026-01-05 01:24:57.884393 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-05 01:24:57.884408 | orchestrator | Monday 05 January 2026 01:24:39 +0000 (0:00:00.360) 0:09:14.151 ********
2026-01-05 01:24:57.884421 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.884436 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.884450 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.884465 | orchestrator |
2026-01-05 01:24:57.884478 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-05 01:24:57.884491 | orchestrator | Monday 05 January 2026 01:24:39 +0000 (0:00:00.603) 0:09:14.755 ********
2026-01-05 01:24:57.884504 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.884517 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.884531 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.884545 | orchestrator |
2026-01-05 01:24:57.884560 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-05 01:24:57.884575 | orchestrator | Monday 05 January 2026 01:24:40 +0000 (0:00:00.348) 0:09:15.104 ********
2026-01-05 01:24:57.884590 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.884606 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.884621 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.884635 | orchestrator |
2026-01-05 01:24:57.884649 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-05 01:24:57.884674 | orchestrator | Monday 05 January 2026 01:24:40 +0000 (0:00:00.335) 0:09:15.440 ********
2026-01-05 01:24:57.884688 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.884702 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.884715 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.884730 | orchestrator |
2026-01-05 01:24:57.884744 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-05 01:24:57.884766 | orchestrator | Monday 05 January 2026 01:24:40 +0000 (0:00:00.349) 0:09:15.789 ********
2026-01-05 01:24:57.884781 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:24:57.884795 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:24:57.884808 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:24:57.884823 | orchestrator |
2026-01-05 01:24:57.884837 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-05 01:24:57.884850 | orchestrator | Monday 05 January 2026 01:24:41 +0000 (0:00:00.876) 0:09:16.666 ********
2026-01-05 01:24:57.884865 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:24:57.884878 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:24:57.884892 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-05 01:24:57.884907 | orchestrator |
2026-01-05 01:24:57.884921 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-05 01:24:57.884934 | orchestrator | Monday 05 January 2026 01:24:42 +0000 (0:00:00.428) 0:09:17.094 ********
2026-01-05 01:24:57.884949 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:24:57.884963 | orchestrator |
2026-01-05 01:24:57.884976 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-05 01:24:57.884990 | orchestrator | Monday 05 January 2026 01:24:44 +0000 (0:00:02.238) 0:09:19.333 ********
2026-01-05 01:24:57.885007 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-05 01:24:57.885024 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:24:57.885037 | orchestrator |
2026-01-05 01:24:57.885051 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-05 01:24:57.885065 | orchestrator | Monday 05 January 2026 01:24:45 +0000 (0:00:00.501) 0:09:19.834 ********
2026-01-05 01:24:57.885083 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 01:24:57.885161 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-05 01:24:57.885178 | orchestrator |
2026-01-05 01:24:57.885192 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-05 01:24:57.885206 | orchestrator | Monday 05 January 2026 01:24:53 +0000 (0:00:08.559) 0:09:28.393 ********
2026-01-05 01:24:57.885220 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 01:24:57.885257 | orchestrator |
2026-01-05 01:24:57.885272 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-05 01:24:57.885285 | orchestrator | Monday 05 January 2026 01:24:57 +0000 (0:00:03.720) 0:09:32.114 ********
2026-01-05 01:24:57.885311 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:25:26.629300 | orchestrator |
2026-01-05 01:25:26.629458 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-05 01:25:26.629474 | orchestrator | Monday 05 January 2026 01:24:57 +0000 (0:00:00.579) 0:09:32.693 ********
2026-01-05 01:25:26.629507 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-05 01:25:26.629517 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-05 01:25:26.629526 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-05 01:25:26.629535 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-05 01:25:26.629544 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-05 01:25:26.629553 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-05 01:25:26.629562 | orchestrator |
2026-01-05 01:25:26.629571 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-05 01:25:26.629580 | orchestrator | Monday 05 January 2026 01:24:58 +0000 (0:00:01.103) 0:09:33.797 ********
2026-01-05 01:25:26.629588 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-05 01:25:26.629598 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-05 01:25:26.629607 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-05 01:25:26.629616 | orchestrator |
2026-01-05 01:25:26.629625 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-05 01:25:26.629634 | orchestrator | Monday 05 January 2026 01:25:01 +0000 (0:00:02.862) 0:09:36.660 ********
2026-01-05 01:25:26.629642 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-05 01:25:26.629651 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-01-05 01:25:26.629660 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:26.629669 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 01:25:26.629678 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-05 01:25:26.629687 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:26.629695 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 01:25:26.629704 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-05 01:25:26.629713 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:26.629721 | orchestrator | 2026-01-05 01:25:26.629730 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-05 01:25:26.629753 | orchestrator | Monday 05 January 2026 01:25:03 +0000 (0:00:01.279) 0:09:37.939 ******** 2026-01-05 01:25:26.629765 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:26.629779 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:26.629794 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:26.629810 | orchestrator | 2026-01-05 01:25:26.629824 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-05 01:25:26.629838 | orchestrator | Monday 05 January 2026 01:25:06 +0000 (0:00:02.993) 0:09:40.932 ******** 2026-01-05 01:25:26.629852 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:26.629866 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:26.629879 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:25:26.629893 | orchestrator | 2026-01-05 01:25:26.629906 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-05 01:25:26.629922 | orchestrator | Monday 05 January 2026 01:25:06 +0000 (0:00:00.322) 0:09:41.254 ******** 2026-01-05 01:25:26.629936 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-05 01:25:26.629952 | orchestrator | 2026-01-05 01:25:26.629966 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-05 01:25:26.629980 | orchestrator | Monday 05 January 2026 01:25:07 +0000 (0:00:00.855) 0:09:42.110 ******** 2026-01-05 01:25:26.629995 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:25:26.630011 | orchestrator | 2026-01-05 01:25:26.630091 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-05 01:25:26.630105 | orchestrator | Monday 05 January 2026 01:25:07 +0000 (0:00:00.563) 0:09:42.673 ******** 2026-01-05 01:25:26.630114 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:26.630133 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:26.630142 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:26.630151 | orchestrator | 2026-01-05 01:25:26.630160 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-05 01:25:26.630168 | orchestrator | Monday 05 January 2026 01:25:09 +0000 (0:00:01.645) 0:09:44.319 ******** 2026-01-05 01:25:26.630177 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:26.630186 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:26.630194 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:26.630203 | orchestrator | 2026-01-05 01:25:26.630212 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-05 01:25:26.630220 | orchestrator | Monday 05 January 2026 01:25:10 +0000 (0:00:01.270) 0:09:45.589 ******** 2026-01-05 01:25:26.630229 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:26.630237 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:26.630246 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:26.630254 | orchestrator | 2026-01-05 
01:25:26.630263 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-05 01:25:26.630272 | orchestrator | Monday 05 January 2026 01:25:12 +0000 (0:00:01.944) 0:09:47.534 ******** 2026-01-05 01:25:26.630280 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:26.630289 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:26.630297 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:26.630306 | orchestrator | 2026-01-05 01:25:26.630315 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-05 01:25:26.630323 | orchestrator | Monday 05 January 2026 01:25:14 +0000 (0:00:02.094) 0:09:49.629 ******** 2026-01-05 01:25:26.630332 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:26.630340 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:26.630416 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:26.630432 | orchestrator | 2026-01-05 01:25:26.630473 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 01:25:26.630495 | orchestrator | Monday 05 January 2026 01:25:16 +0000 (0:00:01.703) 0:09:51.333 ******** 2026-01-05 01:25:26.630509 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:26.630522 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:26.630533 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:26.630546 | orchestrator | 2026-01-05 01:25:26.630559 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-05 01:25:26.630572 | orchestrator | Monday 05 January 2026 01:25:17 +0000 (0:00:01.012) 0:09:52.345 ******** 2026-01-05 01:25:26.630585 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:25:26.630596 | orchestrator | 2026-01-05 01:25:26.630605 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-05 01:25:26.630613 | orchestrator | Monday 05 January 2026 01:25:18 +0000 (0:00:00.586) 0:09:52.932 ******** 2026-01-05 01:25:26.630620 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:26.630628 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:26.630636 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:26.630644 | orchestrator | 2026-01-05 01:25:26.630652 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-05 01:25:26.630660 | orchestrator | Monday 05 January 2026 01:25:18 +0000 (0:00:00.365) 0:09:53.297 ******** 2026-01-05 01:25:26.630668 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:26.630675 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:26.630683 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:26.630691 | orchestrator | 2026-01-05 01:25:26.630699 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-05 01:25:26.630706 | orchestrator | Monday 05 January 2026 01:25:20 +0000 (0:00:01.576) 0:09:54.874 ******** 2026-01-05 01:25:26.630714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:25:26.630722 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:25:26.630739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:25:26.630746 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:26.630754 | orchestrator | 2026-01-05 01:25:26.630762 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-05 01:25:26.630770 | orchestrator | Monday 05 January 2026 01:25:20 +0000 (0:00:00.683) 0:09:55.558 ******** 2026-01-05 01:25:26.630778 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:26.630785 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:26.630793 | orchestrator | ok: [testbed-node-5] 2026-01-05 
01:25:26.630801 | orchestrator | 2026-01-05 01:25:26.630816 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-05 01:25:26.630824 | orchestrator | 2026-01-05 01:25:26.630832 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-05 01:25:26.630840 | orchestrator | Monday 05 January 2026 01:25:21 +0000 (0:00:00.603) 0:09:56.161 ******** 2026-01-05 01:25:26.630849 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:25:26.630859 | orchestrator | 2026-01-05 01:25:26.630867 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-05 01:25:26.630875 | orchestrator | Monday 05 January 2026 01:25:22 +0000 (0:00:00.817) 0:09:56.978 ******** 2026-01-05 01:25:26.630882 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:25:26.630890 | orchestrator | 2026-01-05 01:25:26.630898 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-05 01:25:26.630906 | orchestrator | Monday 05 January 2026 01:25:22 +0000 (0:00:00.567) 0:09:57.546 ******** 2026-01-05 01:25:26.630914 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:26.630921 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:26.630929 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:25:26.630937 | orchestrator | 2026-01-05 01:25:26.630945 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-05 01:25:26.630953 | orchestrator | Monday 05 January 2026 01:25:23 +0000 (0:00:00.593) 0:09:58.139 ******** 2026-01-05 01:25:26.630960 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:26.630968 | orchestrator | ok: [testbed-node-4] 2026-01-05 
01:25:26.630976 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:26.630984 | orchestrator | 2026-01-05 01:25:26.630992 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-05 01:25:26.630999 | orchestrator | Monday 05 January 2026 01:25:24 +0000 (0:00:00.770) 0:09:58.910 ******** 2026-01-05 01:25:26.631007 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:26.631015 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:26.631023 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:26.631031 | orchestrator | 2026-01-05 01:25:26.631038 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-05 01:25:26.631046 | orchestrator | Monday 05 January 2026 01:25:24 +0000 (0:00:00.789) 0:09:59.699 ******** 2026-01-05 01:25:26.631054 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:26.631062 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:26.631069 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:26.631077 | orchestrator | 2026-01-05 01:25:26.631085 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-05 01:25:26.631093 | orchestrator | Monday 05 January 2026 01:25:25 +0000 (0:00:00.749) 0:10:00.449 ******** 2026-01-05 01:25:26.631100 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:26.631108 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:26.631116 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:25:26.631124 | orchestrator | 2026-01-05 01:25:26.631132 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-05 01:25:26.631140 | orchestrator | Monday 05 January 2026 01:25:26 +0000 (0:00:00.637) 0:10:01.086 ******** 2026-01-05 01:25:26.631147 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:26.631160 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:26.631168 | orchestrator | skipping: 
[testbed-node-5] 2026-01-05 01:25:26.631176 | orchestrator | 2026-01-05 01:25:26.631184 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-05 01:25:26.631199 | orchestrator | Monday 05 January 2026 01:25:26 +0000 (0:00:00.349) 0:10:01.436 ******** 2026-01-05 01:25:50.613051 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:50.613162 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:50.613174 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:25:50.613183 | orchestrator | 2026-01-05 01:25:50.613192 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-05 01:25:50.613202 | orchestrator | Monday 05 January 2026 01:25:26 +0000 (0:00:00.347) 0:10:01.784 ******** 2026-01-05 01:25:50.613211 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:50.613220 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:50.613249 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:50.613257 | orchestrator | 2026-01-05 01:25:50.613265 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-05 01:25:50.613272 | orchestrator | Monday 05 January 2026 01:25:27 +0000 (0:00:00.807) 0:10:02.591 ******** 2026-01-05 01:25:50.613281 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:50.613288 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:50.613296 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:50.613303 | orchestrator | 2026-01-05 01:25:50.613311 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-05 01:25:50.613319 | orchestrator | Monday 05 January 2026 01:25:28 +0000 (0:00:01.101) 0:10:03.693 ******** 2026-01-05 01:25:50.613327 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:50.613334 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:50.613342 | orchestrator | skipping: [testbed-node-5] 2026-01-05 
01:25:50.613349 | orchestrator | 2026-01-05 01:25:50.613357 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-05 01:25:50.613365 | orchestrator | Monday 05 January 2026 01:25:29 +0000 (0:00:00.354) 0:10:04.048 ******** 2026-01-05 01:25:50.613372 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:50.613379 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:50.613387 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:25:50.613395 | orchestrator | 2026-01-05 01:25:50.613403 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-05 01:25:50.613411 | orchestrator | Monday 05 January 2026 01:25:29 +0000 (0:00:00.348) 0:10:04.396 ******** 2026-01-05 01:25:50.613418 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:50.613426 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:50.613433 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:50.613441 | orchestrator | 2026-01-05 01:25:50.613520 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-05 01:25:50.613529 | orchestrator | Monday 05 January 2026 01:25:29 +0000 (0:00:00.351) 0:10:04.747 ******** 2026-01-05 01:25:50.613537 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:50.613561 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:50.613569 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:50.613577 | orchestrator | 2026-01-05 01:25:50.613585 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-05 01:25:50.613593 | orchestrator | Monday 05 January 2026 01:25:30 +0000 (0:00:00.669) 0:10:05.417 ******** 2026-01-05 01:25:50.613600 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:50.613607 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:50.613615 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:50.613622 | orchestrator | 2026-01-05 
01:25:50.613630 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-05 01:25:50.613638 | orchestrator | Monday 05 January 2026 01:25:30 +0000 (0:00:00.360) 0:10:05.777 ******** 2026-01-05 01:25:50.613645 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:50.613653 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:50.613661 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:25:50.613690 | orchestrator | 2026-01-05 01:25:50.613698 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-05 01:25:50.613706 | orchestrator | Monday 05 January 2026 01:25:31 +0000 (0:00:00.353) 0:10:06.130 ******** 2026-01-05 01:25:50.613713 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:50.613721 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:50.613728 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:25:50.613735 | orchestrator | 2026-01-05 01:25:50.613743 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-05 01:25:50.613751 | orchestrator | Monday 05 January 2026 01:25:31 +0000 (0:00:00.625) 0:10:06.755 ******** 2026-01-05 01:25:50.613759 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:50.613767 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:50.613775 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:25:50.613782 | orchestrator | 2026-01-05 01:25:50.613790 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-05 01:25:50.613797 | orchestrator | Monday 05 January 2026 01:25:32 +0000 (0:00:00.340) 0:10:07.096 ******** 2026-01-05 01:25:50.613805 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:50.613812 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:50.613819 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:50.613827 | orchestrator | 2026-01-05 01:25:50.613835 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-05 01:25:50.613842 | orchestrator | Monday 05 January 2026 01:25:32 +0000 (0:00:00.376) 0:10:07.472 ******** 2026-01-05 01:25:50.613850 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:25:50.613857 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:25:50.613865 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:25:50.613872 | orchestrator | 2026-01-05 01:25:50.613880 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-05 01:25:50.613888 | orchestrator | Monday 05 January 2026 01:25:33 +0000 (0:00:00.861) 0:10:08.333 ******** 2026-01-05 01:25:50.613897 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:25:50.613906 | orchestrator | 2026-01-05 01:25:50.613914 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-05 01:25:50.613921 | orchestrator | Monday 05 January 2026 01:25:34 +0000 (0:00:00.592) 0:10:08.925 ******** 2026-01-05 01:25:50.613929 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:25:50.613937 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 01:25:50.613945 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:25:50.613952 | orchestrator | 2026-01-05 01:25:50.613960 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-05 01:25:50.613988 | orchestrator | Monday 05 January 2026 01:25:36 +0000 (0:00:02.224) 0:10:11.150 ******** 2026-01-05 01:25:50.613995 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 01:25:50.614002 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-05 01:25:50.614009 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:50.614078 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-05 01:25:50.614104 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-05 01:25:50.614112 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:50.614120 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 01:25:50.614127 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-05 01:25:50.614135 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:50.614143 | orchestrator | 2026-01-05 01:25:50.614150 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-05 01:25:50.614157 | orchestrator | Monday 05 January 2026 01:25:37 +0000 (0:00:01.358) 0:10:12.509 ******** 2026-01-05 01:25:50.614164 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:25:50.614171 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:25:50.614177 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:25:50.614193 | orchestrator | 2026-01-05 01:25:50.614201 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-05 01:25:50.614208 | orchestrator | Monday 05 January 2026 01:25:38 +0000 (0:00:00.603) 0:10:13.112 ******** 2026-01-05 01:25:50.614214 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:25:50.614222 | orchestrator | 2026-01-05 01:25:50.614229 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-05 01:25:50.614236 | orchestrator | Monday 05 January 2026 01:25:38 +0000 (0:00:00.576) 0:10:13.688 ******** 2026-01-05 01:25:50.614246 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 01:25:50.614255 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 01:25:50.614268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 01:25:50.614275 | orchestrator | 2026-01-05 01:25:50.614281 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-05 01:25:50.614287 | orchestrator | Monday 05 January 2026 01:25:40 +0000 (0:00:01.174) 0:10:14.863 ******** 2026-01-05 01:25:50.614294 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:25:50.614300 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 01:25:50.614307 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:25:50.614313 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 01:25:50.614319 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:25:50.614326 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-05 01:25:50.614332 | orchestrator | 2026-01-05 01:25:50.614338 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-05 01:25:50.614344 | orchestrator | Monday 05 January 2026 01:25:45 +0000 (0:00:05.105) 0:10:19.968 ******** 2026-01-05 01:25:50.614350 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:25:50.614356 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:25:50.614362 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:25:50.614368 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:25:50.614374 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:25:50.614380 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:25:50.614386 | orchestrator | 2026-01-05 01:25:50.614392 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-05 01:25:50.614398 | orchestrator | Monday 05 January 2026 01:25:47 +0000 (0:00:02.643) 0:10:22.611 ******** 2026-01-05 01:25:50.614405 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-05 01:25:50.614411 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:25:50.614417 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-05 01:25:50.614424 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:25:50.614430 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-05 01:25:50.614437 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:25:50.614443 | orchestrator | 2026-01-05 01:25:50.614479 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-05 01:25:50.614486 | orchestrator | Monday 05 January 2026 01:25:49 +0000 (0:00:01.338) 0:10:23.949 ******** 2026-01-05 01:25:50.614593 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-05 01:25:50.614601 | orchestrator | 2026-01-05 01:25:50.614608 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-05 01:25:50.614615 | orchestrator | Monday 05 January 2026 01:25:49 +0000 (0:00:00.266) 0:10:24.216 ******** 2026-01-05 01:25:50.614635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-05 01:26:36.048779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:26:36.048899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:26:36.048911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:26:36.048919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:26:36.048926 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:26:36.048934 | orchestrator | 2026-01-05 01:26:36.048942 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-05 01:26:36.048950 | orchestrator | Monday 05 January 2026 01:25:50 +0000 (0:00:01.203) 0:10:25.420 ******** 2026-01-05 01:26:36.048958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:26:36.048964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:26:36.048971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:26:36.048978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:26:36.048983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-05 01:26:36.048988 | orchestrator | skipping: [testbed-node-3] 2026-01-05 
01:26:36.048992 | orchestrator | 2026-01-05 01:26:36.048996 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-05 01:26:36.049014 | orchestrator | Monday 05 January 2026 01:25:51 +0000 (0:00:00.672) 0:10:26.093 ******** 2026-01-05 01:26:36.049019 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:26:36.049024 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:26:36.049028 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:26:36.049033 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:26:36.049037 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-05 01:26:36.049041 | orchestrator | 2026-01-05 01:26:36.049045 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-05 01:26:36.049049 | orchestrator | Monday 05 January 2026 01:26:22 +0000 (0:00:31.068) 0:10:57.161 ******** 2026-01-05 01:26:36.049053 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:26:36.049058 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:26:36.049078 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:26:36.049083 | orchestrator | 2026-01-05 01:26:36.049087 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-05 01:26:36.049091 | orchestrator | 
Monday 05 January 2026 01:26:22 +0000 (0:00:00.349) 0:10:57.511 ******** 2026-01-05 01:26:36.049095 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:26:36.049099 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:26:36.049102 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:26:36.049106 | orchestrator | 2026-01-05 01:26:36.049110 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-05 01:26:36.049114 | orchestrator | Monday 05 January 2026 01:26:23 +0000 (0:00:00.354) 0:10:57.865 ******** 2026-01-05 01:26:36.049119 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:26:36.049123 | orchestrator | 2026-01-05 01:26:36.049127 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-05 01:26:36.049131 | orchestrator | Monday 05 January 2026 01:26:23 +0000 (0:00:00.863) 0:10:58.729 ******** 2026-01-05 01:26:36.049135 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:26:36.049139 | orchestrator | 2026-01-05 01:26:36.049143 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-05 01:26:36.049147 | orchestrator | Monday 05 January 2026 01:26:24 +0000 (0:00:00.622) 0:10:59.352 ******** 2026-01-05 01:26:36.049151 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:26:36.049155 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:26:36.049159 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:26:36.049163 | orchestrator | 2026-01-05 01:26:36.049167 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-05 01:26:36.049171 | orchestrator | Monday 05 January 2026 01:26:26 +0000 (0:00:02.024) 0:11:01.376 ******** 2026-01-05 01:26:36.049175 | orchestrator | changed: 
[testbed-node-3] 2026-01-05 01:26:36.049193 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:26:36.049197 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:26:36.049201 | orchestrator | 2026-01-05 01:26:36.049206 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-05 01:26:36.049212 | orchestrator | Monday 05 January 2026 01:26:27 +0000 (0:00:01.284) 0:11:02.661 ******** 2026-01-05 01:26:36.049217 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:26:36.049225 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:26:36.049234 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:26:36.049242 | orchestrator | 2026-01-05 01:26:36.049248 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-05 01:26:36.049254 | orchestrator | Monday 05 January 2026 01:26:29 +0000 (0:00:02.001) 0:11:04.663 ******** 2026-01-05 01:26:36.049260 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-05 01:26:36.049266 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-05 01:26:36.049272 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-05 01:26:36.049278 | orchestrator | 2026-01-05 01:26:36.049284 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-05 01:26:36.049291 | orchestrator | Monday 05 January 2026 01:26:32 +0000 (0:00:02.936) 0:11:07.600 ******** 2026-01-05 01:26:36.049297 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:26:36.049304 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:26:36.049311 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:26:36.049318 | orchestrator 
| 2026-01-05 01:26:36.049324 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-05 01:26:36.049331 | orchestrator | Monday 05 January 2026 01:26:33 +0000 (0:00:00.380) 0:11:07.980 ******** 2026-01-05 01:26:36.049344 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:26:36.049351 | orchestrator | 2026-01-05 01:26:36.049357 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-05 01:26:36.049370 | orchestrator | Monday 05 January 2026 01:26:34 +0000 (0:00:00.856) 0:11:08.837 ******** 2026-01-05 01:26:36.049377 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:36.049383 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:36.049388 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:36.049393 | orchestrator | 2026-01-05 01:26:36.049398 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-05 01:26:36.049402 | orchestrator | Monday 05 January 2026 01:26:34 +0000 (0:00:00.387) 0:11:09.225 ******** 2026-01-05 01:26:36.049408 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:26:36.049413 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:26:36.049417 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:26:36.049422 | orchestrator | 2026-01-05 01:26:36.049427 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-05 01:26:36.049431 | orchestrator | Monday 05 January 2026 01:26:34 +0000 (0:00:00.377) 0:11:09.603 ******** 2026-01-05 01:26:36.049436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:26:36.049441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:26:36.049448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:26:36.049456 | orchestrator 
| skipping: [testbed-node-3] 2026-01-05 01:26:36.049465 | orchestrator | 2026-01-05 01:26:36.049472 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-05 01:26:36.049478 | orchestrator | Monday 05 January 2026 01:26:35 +0000 (0:00:00.997) 0:11:10.601 ******** 2026-01-05 01:26:36.049484 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:36.049491 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:36.049497 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:36.049502 | orchestrator | 2026-01-05 01:26:36.049508 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:26:36.049515 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-05 01:26:36.049523 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-05 01:26:36.049529 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-05 01:26:36.049536 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-05 01:26:36.049542 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-05 01:26:36.049549 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-05 01:26:36.049555 | orchestrator | 2026-01-05 01:26:36.049562 | orchestrator | 2026-01-05 01:26:36.049568 | orchestrator | 2026-01-05 01:26:36.049575 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:26:36.049581 | orchestrator | Monday 05 January 2026 01:26:36 +0000 (0:00:00.237) 0:11:10.838 ******** 2026-01-05 01:26:36.049588 | orchestrator | =============================================================================== 
2026-01-05 01:26:36.049595 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 50.99s 2026-01-05 01:26:36.049610 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.15s 2026-01-05 01:26:36.657932 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.62s 2026-01-05 01:26:36.658143 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.07s 2026-01-05 01:26:36.658162 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.12s 2026-01-05 01:26:36.658171 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.73s 2026-01-05 01:26:36.658186 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.59s 2026-01-05 01:26:36.658194 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.38s 2026-01-05 01:26:36.658201 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.34s 2026-01-05 01:26:36.658208 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.56s 2026-01-05 01:26:36.658215 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.97s 2026-01-05 01:26:36.658222 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.46s 2026-01-05 01:26:36.658230 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.29s 2026-01-05 01:26:36.658237 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.11s 2026-01-05 01:26:36.658244 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.21s 2026-01-05 01:26:36.658251 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.20s 2026-01-05 
01:26:36.658258 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 4.20s 2026-01-05 01:26:36.658275 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.07s 2026-01-05 01:26:36.658291 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.81s 2026-01-05 01:26:36.658299 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.72s 2026-01-05 01:26:39.109615 | orchestrator | 2026-01-05 01:26:39 | INFO  | Task f8be5bce-ce7d-49a5-935c-4c2ac839d140 (ceph-pools) was prepared for execution. 2026-01-05 01:26:39.109747 | orchestrator | 2026-01-05 01:26:39 | INFO  | It takes a moment until task f8be5bce-ce7d-49a5-935c-4c2ac839d140 (ceph-pools) has been started and output is visible here. 2026-01-05 01:26:53.800720 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-05 01:26:53.800832 | orchestrator | 2.16.14 2026-01-05 01:26:53.800850 | orchestrator | 2026-01-05 01:26:53.800862 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-05 01:26:53.800873 | orchestrator | 2026-01-05 01:26:53.800880 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-05 01:26:53.800887 | orchestrator | Monday 05 January 2026 01:26:43 +0000 (0:00:00.633) 0:00:00.633 ******** 2026-01-05 01:26:53.800893 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:26:53.800900 | orchestrator | 2026-01-05 01:26:53.800907 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-05 01:26:53.800914 | orchestrator | Monday 05 January 2026 01:26:44 +0000 (0:00:00.666) 0:00:01.299 ******** 2026-01-05 01:26:53.800920 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:53.800926 | 
orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:53.800932 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:53.800937 | orchestrator | 2026-01-05 01:26:53.800943 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-05 01:26:53.800949 | orchestrator | Monday 05 January 2026 01:26:45 +0000 (0:00:00.681) 0:00:01.981 ******** 2026-01-05 01:26:53.800955 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:53.800961 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:53.800967 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:53.800972 | orchestrator | 2026-01-05 01:26:53.800978 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-05 01:26:53.800984 | orchestrator | Monday 05 January 2026 01:26:45 +0000 (0:00:00.340) 0:00:02.321 ******** 2026-01-05 01:26:53.800990 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:53.801015 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:53.801021 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:53.801027 | orchestrator | 2026-01-05 01:26:53.801033 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-05 01:26:53.801039 | orchestrator | Monday 05 January 2026 01:26:46 +0000 (0:00:00.977) 0:00:03.299 ******** 2026-01-05 01:26:53.801044 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:53.801050 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:53.801056 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:53.801062 | orchestrator | 2026-01-05 01:26:53.801068 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-05 01:26:53.801073 | orchestrator | Monday 05 January 2026 01:26:46 +0000 (0:00:00.331) 0:00:03.631 ******** 2026-01-05 01:26:53.801079 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:53.801085 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:53.801090 | 
orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:53.801096 | orchestrator | 2026-01-05 01:26:53.801102 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-05 01:26:53.801108 | orchestrator | Monday 05 January 2026 01:26:47 +0000 (0:00:00.345) 0:00:03.976 ******** 2026-01-05 01:26:53.801113 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:53.801119 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:53.801125 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:53.801130 | orchestrator | 2026-01-05 01:26:53.801136 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-05 01:26:53.801142 | orchestrator | Monday 05 January 2026 01:26:47 +0000 (0:00:00.539) 0:00:04.516 ******** 2026-01-05 01:26:53.801148 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:26:53.801155 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:26:53.801161 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:26:53.801166 | orchestrator | 2026-01-05 01:26:53.801172 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-05 01:26:53.801178 | orchestrator | Monday 05 January 2026 01:26:47 +0000 (0:00:00.323) 0:00:04.839 ******** 2026-01-05 01:26:53.801184 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:53.801189 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:53.801195 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:53.801201 | orchestrator | 2026-01-05 01:26:53.801207 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-05 01:26:53.801213 | orchestrator | Monday 05 January 2026 01:26:48 +0000 (0:00:00.317) 0:00:05.157 ******** 2026-01-05 01:26:53.801218 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:26:53.801234 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:26:53.801241 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:26:53.801255 | orchestrator | 2026-01-05 01:26:53.801262 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-05 01:26:53.801269 | orchestrator | Monday 05 January 2026 01:26:49 +0000 (0:00:00.875) 0:00:06.033 ******** 2026-01-05 01:26:53.801275 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:26:53.801282 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:26:53.801288 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:26:53.801295 | orchestrator | 2026-01-05 01:26:53.801302 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-05 01:26:53.801308 | orchestrator | Monday 05 January 2026 01:26:49 +0000 (0:00:00.453) 0:00:06.486 ******** 2026-01-05 01:26:53.801315 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:26:53.801322 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:26:53.801329 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:26:53.801336 | orchestrator | 2026-01-05 01:26:53.801341 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-05 01:26:53.801352 | orchestrator | Monday 05 January 2026 01:26:51 +0000 (0:00:02.298) 0:00:08.785 ******** 2026-01-05 01:26:53.801358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-05 01:26:53.801376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-05 01:26:53.801382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-05 01:26:53.801388 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:26:53.801393 | 
orchestrator | 2026-01-05 01:26:53.801413 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-05 01:26:53.801420 | orchestrator | Monday 05 January 2026 01:26:52 +0000 (0:00:00.917) 0:00:09.702 ******** 2026-01-05 01:26:53.801428 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-05 01:26:53.801436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-05 01:26:53.801442 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-05 01:26:53.801448 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:26:53.801454 | orchestrator | 2026-01-05 01:26:53.801459 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-05 01:26:53.801465 | orchestrator | Monday 05 January 2026 01:26:53 +0000 (0:00:00.691) 0:00:10.394 ******** 2026-01-05 01:26:53.801473 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 01:26:53.801481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 01:26:53.801487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-05 01:26:53.801493 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:26:53.801499 | orchestrator | 2026-01-05 01:26:53.801505 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-05 01:26:53.801511 | orchestrator | Monday 05 January 2026 01:26:53 +0000 (0:00:00.173) 0:00:10.568 ******** 2026-01-05 01:26:53.801519 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c181a7e52b5d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-05 01:26:50.440304', 'end': '2026-01-05 01:26:50.477048', 'delta': '0:00:00.036744', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c181a7e52b5d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-05 01:26:53.801536 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8220df20b331', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-05 01:26:51.063900', 'end': '2026-01-05 01:26:51.107429', 'delta': '0:00:00.043529', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8220df20b331'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-05 01:26:53.801547 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '62bad87f6045', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-05 01:26:51.631767', 'end': '2026-01-05 01:26:51.670119', 'delta': '0:00:00.038352', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['62bad87f6045'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-05 01:27:00.946905 | orchestrator | 2026-01-05 01:27:00.947007 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-05 01:27:00.947019 | orchestrator | Monday 05 January 2026 01:26:53 +0000 (0:00:00.197) 0:00:10.765 ******** 2026-01-05 01:27:00.947027 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:27:00.947035 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:27:00.947042 | 
orchestrator | ok: [testbed-node-5] 2026-01-05 01:27:00.947048 | orchestrator | 2026-01-05 01:27:00.947055 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-05 01:27:00.947063 | orchestrator | Monday 05 January 2026 01:26:54 +0000 (0:00:00.567) 0:00:11.332 ******** 2026-01-05 01:27:00.947071 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-05 01:27:00.947078 | orchestrator | 2026-01-05 01:27:00.947084 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-05 01:27:00.947091 | orchestrator | Monday 05 January 2026 01:26:56 +0000 (0:00:01.875) 0:00:13.207 ******** 2026-01-05 01:27:00.947097 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:00.947103 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947109 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947114 | orchestrator | 2026-01-05 01:27:00.947120 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-05 01:27:00.947127 | orchestrator | Monday 05 January 2026 01:26:56 +0000 (0:00:00.312) 0:00:13.520 ******** 2026-01-05 01:27:00.947134 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:00.947140 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947146 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947152 | orchestrator | 2026-01-05 01:27:00.947158 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-05 01:27:00.947164 | orchestrator | Monday 05 January 2026 01:26:57 +0000 (0:00:00.659) 0:00:14.179 ******** 2026-01-05 01:27:00.947171 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:00.947177 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947183 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947189 | orchestrator | 2026-01-05 01:27:00.947196 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-05 01:27:00.947225 | orchestrator | Monday 05 January 2026 01:26:57 +0000 (0:00:00.310) 0:00:14.490 ******** 2026-01-05 01:27:00.947232 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:27:00.947238 | orchestrator | 2026-01-05 01:27:00.947245 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-05 01:27:00.947253 | orchestrator | Monday 05 January 2026 01:26:57 +0000 (0:00:00.155) 0:00:14.645 ******** 2026-01-05 01:27:00.947259 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:00.947265 | orchestrator | 2026-01-05 01:27:00.947271 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-05 01:27:00.947277 | orchestrator | Monday 05 January 2026 01:26:57 +0000 (0:00:00.234) 0:00:14.880 ******** 2026-01-05 01:27:00.947283 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:00.947289 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947295 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947301 | orchestrator | 2026-01-05 01:27:00.947308 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-05 01:27:00.947314 | orchestrator | Monday 05 January 2026 01:26:58 +0000 (0:00:00.312) 0:00:15.193 ******** 2026-01-05 01:27:00.947320 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:00.947326 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947332 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947338 | orchestrator | 2026-01-05 01:27:00.947343 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-05 01:27:00.947352 | orchestrator | Monday 05 January 2026 01:26:58 +0000 (0:00:00.546) 0:00:15.739 ******** 2026-01-05 01:27:00.947361 | orchestrator | skipping: [testbed-node-3] 
2026-01-05 01:27:00.947367 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947372 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947377 | orchestrator | 2026-01-05 01:27:00.947383 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-05 01:27:00.947389 | orchestrator | Monday 05 January 2026 01:26:59 +0000 (0:00:00.334) 0:00:16.074 ******** 2026-01-05 01:27:00.947394 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:00.947400 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947406 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947412 | orchestrator | 2026-01-05 01:27:00.947417 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-05 01:27:00.947423 | orchestrator | Monday 05 January 2026 01:26:59 +0000 (0:00:00.328) 0:00:16.402 ******** 2026-01-05 01:27:00.947428 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:00.947434 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947444 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947452 | orchestrator | 2026-01-05 01:27:00.947459 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-05 01:27:00.947480 | orchestrator | Monday 05 January 2026 01:26:59 +0000 (0:00:00.349) 0:00:16.752 ******** 2026-01-05 01:27:00.947486 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:00.947492 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947499 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947505 | orchestrator | 2026-01-05 01:27:00.947512 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-05 01:27:00.947523 | orchestrator | Monday 05 January 2026 01:27:00 +0000 (0:00:00.584) 0:00:17.336 ******** 2026-01-05 01:27:00.947529 | orchestrator | skipping: [testbed-node-3] 
2026-01-05 01:27:00.947534 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:00.947540 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:00.947547 | orchestrator | 2026-01-05 01:27:00.947553 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-05 01:27:00.947559 | orchestrator | Monday 05 January 2026 01:27:00 +0000 (0:00:00.365) 0:00:17.702 ******** 2026-01-05 01:27:00.947590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b63b326--8bb9--546b--aabb--a628fef076ec-osd--block--9b63b326--8bb9--546b--aabb--a628fef076ec', 'dm-uuid-LVM-aax8Lv27NCCQjPi1qio1vJTPmq4Z2c3GNKnBnMAGF0tJqvI6sSs3evnn2KUDak0C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.947612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c-osd--block--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c', 'dm-uuid-LVM-YlhFCSURoBNX3OX3YiXj0O0Zc8T7SdkQ6cGUFgLwbPE7lg60PLZeAg8gCNHqABZF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.947620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.947630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.947637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.947643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.947650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-05 01:27:00.947661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.947667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.947686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.998690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cc420972--ce44--5a44--a5a6--a707e77471c5-osd--block--cc420972--ce44--5a44--a5a6--a707e77471c5', 'dm-uuid-LVM-5y6etqdOybvwL8SKpqd9lO6ea8AihF6ogUllt99DApL2987EmbvRNCTuGEj3rZSj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.998817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:00.998849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e-osd--block--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e', 'dm-uuid-LVM-XtuO3kY5fz70u4PVT7kRjLID7YPzDdlaHpmcICK37hKMr5v7VPxurWIVpPi7MnTe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.998878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9b63b326--8bb9--546b--aabb--a628fef076ec-osd--block--9b63b326--8bb9--546b--aabb--a628fef076ec'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3CXOmo-fjPi-sB7K-8cPd-5gdl-1Eim-RcVjLf', 'scsi-0QEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d', 'scsi-SQEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:00.998909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.998921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c-osd--block--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NUQqtM-WhHN-H4hZ-NI7i-o5vB-47E9-oLkBA6', 'scsi-0QEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b', 'scsi-SQEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:00.998950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.998961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41', 'scsi-SQEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:00.998970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:00.998985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:00.999001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-05 01:27:00.999017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.118574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.118681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.118698 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:01.118795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.118846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part1', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part14', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part15', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part16', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.118908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cc420972--ce44--5a44--a5a6--a707e77471c5-osd--block--cc420972--ce44--5a44--a5a6--a707e77471c5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NROBWj-Y4k8-CNdZ-qqy0-C4t6-f7oi-swyzyZ', 'scsi-0QEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70', 'scsi-SQEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.118925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e-osd--block--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-humsa4-KTgB-CbrW-Qmdc-6Kz2-zs64-ZjfIZb', 'scsi-0QEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522', 'scsi-SQEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.118938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3', 'scsi-SQEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.118950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.118963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13a82a55--1430--5b0a--a1a4--baa9d6ca4414-osd--block--13a82a55--1430--5b0a--a1a4--baa9d6ca4414', 'dm-uuid-LVM-N6IGAFcTK4f0RIoIL68bIa5oeOtjeq5VPt3zysJ6uusfwuUnDTnTWFIlh4KrifZL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.118975 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:27:01.118993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--124df3d1--788c--586c--b42c--9b6f84a90775-osd--block--124df3d1--788c--586c--b42c--9b6f84a90775', 'dm-uuid-LVM-esS8nIABj2XOT7SZaVlhCHBSO01PfHEXG2YstjcMIJDQ3Sk02xtXf1d3vB4hoV1I'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.119013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.119033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.411031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.411165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.411192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.411212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.411232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.411251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-05 01:27:01.411378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part16', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.411403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--13a82a55--1430--5b0a--a1a4--baa9d6ca4414-osd--block--13a82a55--1430--5b0a--a1a4--baa9d6ca4414'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WJQI6q-2zyD-20jq-Oozc-DwSo-tfBz-Pdgnig', 'scsi-0QEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b', 'scsi-SQEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.411418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--124df3d1--788c--586c--b42c--9b6f84a90775-osd--block--124df3d1--788c--586c--b42c--9b6f84a90775'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sOK4iT-Id2D-YriF-inFH-U9JU-3Vhu-dR9bqV', 'scsi-0QEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0', 'scsi-SQEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.411431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2', 'scsi-SQEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.411460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-05 01:27:01.411475 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:27:01.411488 | orchestrator | 2026-01-05 01:27:01.411500 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-05 01:27:01.411513 | orchestrator | Monday 05 January 2026 01:27:01 +0000 (0:00:00.573) 0:00:18.276 ******** 2026-01-05 01:27:01.411538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b63b326--8bb9--546b--aabb--a628fef076ec-osd--block--9b63b326--8bb9--546b--aabb--a628fef076ec', 'dm-uuid-LVM-aax8Lv27NCCQjPi1qio1vJTPmq4Z2c3GNKnBnMAGF0tJqvI6sSs3evnn2KUDak0C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.526637 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c-osd--block--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c', 'dm-uuid-LVM-YlhFCSURoBNX3OX3YiXj0O0Zc8T7SdkQ6cGUFgLwbPE7lg60PLZeAg8gCNHqABZF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.526798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.526828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.526875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.526952 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.526972 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.527018 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.527037 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.527054 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.527084 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b0d1c85-8aad-4201-aadd-214ecf9ccf0b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-05 01:27:01.527128 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cc420972--ce44--5a44--a5a6--a707e77471c5-osd--block--cc420972--ce44--5a44--a5a6--a707e77471c5', 'dm-uuid-LVM-5y6etqdOybvwL8SKpqd9lO6ea8AihF6ogUllt99DApL2987EmbvRNCTuGEj3rZSj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.666761 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9b63b326--8bb9--546b--aabb--a628fef076ec-osd--block--9b63b326--8bb9--546b--aabb--a628fef076ec'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3CXOmo-fjPi-sB7K-8cPd-5gdl-1Eim-RcVjLf', 'scsi-0QEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d', 'scsi-SQEMU_QEMU_HARDDISK_bcde85c0-b124-4268-b34b-cc4a07cfe72d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.666865 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e-osd--block--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e', 'dm-uuid-LVM-XtuO3kY5fz70u4PVT7kRjLID7YPzDdlaHpmcICK37hKMr5v7VPxurWIVpPi7MnTe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.666954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c-osd--block--b6ae7fca--c2f2--5e20--af6f--426bd4b4cc4c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NUQqtM-WhHN-H4hZ-NI7i-o5vB-47E9-oLkBA6', 'scsi-0QEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b', 'scsi-SQEMU_QEMU_HARDDISK_99050707-7ba3-43f8-b640-7ac26fbd844b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.666968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41', 'scsi-SQEMU_QEMU_HARDDISK_ca851d29-aa00-48c4-a2d0-a646814f4a41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.666997 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.667011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.667022 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:01.667035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.667055 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.667071 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.667082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.667092 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:01.667110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.034677 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13a82a55--1430--5b0a--a1a4--baa9d6ca4414-osd--block--13a82a55--1430--5b0a--a1a4--baa9d6ca4414', 'dm-uuid-LVM-N6IGAFcTK4f0RIoIL68bIa5oeOtjeq5VPt3zysJ6uusfwuUnDTnTWFIlh4KrifZL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.034822 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.034849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--124df3d1--788c--586c--b42c--9b6f84a90775-osd--block--124df3d1--788c--586c--b42c--9b6f84a90775', 'dm-uuid-LVM-esS8nIABj2XOT7SZaVlhCHBSO01PfHEXG2YstjcMIJDQ3Sk02xtXf1d3vB4hoV1I'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.034876 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part1', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part14', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part15', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part16', 'scsi-SQEMU_QEMU_HARDDISK_3250b0f8-cf47-4b18-9931-22a1ebe34c49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-05 01:27:02.034892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cc420972--ce44--5a44--a5a6--a707e77471c5-osd--block--cc420972--ce44--5a44--a5a6--a707e77471c5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NROBWj-Y4k8-CNdZ-qqy0-C4t6-f7oi-swyzyZ', 'scsi-0QEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70', 'scsi-SQEMU_QEMU_HARDDISK_9f2df327-5b12-4442-ac27-592210953f70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.034904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.034911 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e-osd--block--62cfaa39--e4fc--5ede--b6ae--ee7ea3f2ad3e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-humsa4-KTgB-CbrW-Qmdc-6Kz2-zs64-ZjfIZb', 'scsi-0QEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522', 'scsi-SQEMU_QEMU_HARDDISK_ead21d4d-eccd-4cd4-b0bf-ce9a2f7ae522'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.034918 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.034931 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3', 'scsi-SQEMU_QEMU_HARDDISK_6e0b145f-2bfd-4824-bc37-4d4082c6f3f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.208008 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.208106 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.208119 | orchestrator | skipping: 
[testbed-node-4] 2026-01-05 01:27:02.208147 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.208157 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.208165 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.208174 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.208224 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.208252 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part1', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part14', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part15', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part16', 'scsi-SQEMU_QEMU_HARDDISK_994d72d0-f7fa-4ba3-8a27-05b8bd26fa8b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-05 01:27:02.208270 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--13a82a55--1430--5b0a--a1a4--baa9d6ca4414-osd--block--13a82a55--1430--5b0a--a1a4--baa9d6ca4414'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WJQI6q-2zyD-20jq-Oozc-DwSo-tfBz-Pdgnig', 'scsi-0QEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b', 'scsi-SQEMU_QEMU_HARDDISK_09f09123-b92e-4af4-8119-7d25e215193b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:02.208295 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--124df3d1--788c--586c--b42c--9b6f84a90775-osd--block--124df3d1--788c--586c--b42c--9b6f84a90775'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sOK4iT-Id2D-YriF-inFH-U9JU-3Vhu-dR9bqV', 'scsi-0QEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0', 'scsi-SQEMU_QEMU_HARDDISK_1d3cc069-e4cd-473c-8ec3-e2e615e111a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:13.537989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2', 'scsi-SQEMU_QEMU_HARDDISK_6f88ade1-67f9-419a-b69f-9c70a1e62aa2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-05 01:27:13.538155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-05-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-05 01:27:13.538171 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:27:13.538182 | orchestrator |
2026-01-05 01:27:13.538191 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-05 01:27:13.538202 | orchestrator | Monday 05 January 2026 01:27:02 +0000 (0:00:00.896) 0:00:19.172 ********
2026-01-05 01:27:13.538210 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:27:13.538220 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:27:13.538227 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:27:13.538245 | orchestrator |
2026-01-05 01:27:13.538257 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-05 01:27:13.538265 | orchestrator | Monday 05 January 2026 01:27:02 +0000 (0:00:00.767) 0:00:19.939 ********
2026-01-05 01:27:13.538272 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:27:13.538280 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:27:13.538288 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:27:13.538295 | orchestrator |
2026-01-05 01:27:13.538304 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-05 01:27:13.538312 | orchestrator | Monday 05 January 2026 01:27:03 +0000 (0:00:00.317) 0:00:20.257 ********
2026-01-05 01:27:13.538320 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:27:13.538328 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:27:13.538335 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:27:13.538342 | orchestrator |
2026-01-05 01:27:13.538349 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-05 01:27:13.538373 | orchestrator | Monday 05 January 2026 01:27:03 +0000 (0:00:00.673) 0:00:20.930 ********
2026-01-05 01:27:13.538381 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:27:13.538389 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:27:13.538397 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:27:13.538404 | orchestrator |
2026-01-05 01:27:13.538412 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-05 01:27:13.538420 | orchestrator | Monday 05 January 2026 01:27:04 +0000 (0:00:00.526) 0:00:21.457 ********
2026-01-05 01:27:13.538428 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:27:13.538436 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:27:13.538444 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:27:13.538451 | orchestrator |
2026-01-05 01:27:13.538458 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-05 01:27:13.538465 | orchestrator | Monday 05 January 2026 01:27:04 +0000 (0:00:00.469) 0:00:21.927 ********
2026-01-05 01:27:13.538473 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:27:13.538480 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:27:13.538488 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:27:13.538495 | orchestrator |
2026-01-05 01:27:13.538503 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-05 01:27:13.538511 | orchestrator | Monday 05 January 2026 01:27:05 +0000 (0:00:00.373) 0:00:22.301 ********
2026-01-05 01:27:13.538518 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 01:27:13.538526 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 01:27:13.538535 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 01:27:13.538543 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 01:27:13.538551 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 01:27:13.538558 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 01:27:13.538567 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 01:27:13.538574 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 01:27:13.538582 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 01:27:13.538590 | orchestrator |
2026-01-05 01:27:13.538597 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-05 01:27:13.538605 | orchestrator | Monday 05 January 2026 01:27:06 +0000 (0:00:01.229) 0:00:23.531 ********
2026-01-05 01:27:13.538630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-05 01:27:13.538640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-05 01:27:13.538647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-05 01:27:13.538655 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:27:13.538663 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-05 01:27:13.538670 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-05 01:27:13.538678 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-05 01:27:13.538685 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:27:13.538693 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-05 01:27:13.538700 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-05 01:27:13.538708 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-05 01:27:13.538715 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:27:13.538723 | orchestrator |
2026-01-05 01:27:13.538731 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-05 01:27:13.538738 | orchestrator | Monday 05 January 2026 01:27:07 +0000 (0:00:00.606) 0:00:24.137 ********
2026-01-05 01:27:13.538747 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:27:13.538774 | orchestrator |
2026-01-05 01:27:13.538789 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-05 01:27:13.538807 | orchestrator | Monday 05 January 2026 01:27:07 +0000 (0:00:00.547) 0:00:24.685 ********
2026-01-05 01:27:13.538815 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:27:13.538823 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:27:13.538830 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:27:13.538838 | orchestrator |
2026-01-05 01:27:13.538845 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-05 01:27:13.538853 | orchestrator | Monday 05 January 2026 01:27:08 +0000 (0:00:00.528) 0:00:25.214 ********
2026-01-05 01:27:13.538861 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:27:13.538868 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:27:13.538875 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:27:13.538883 | orchestrator |
2026-01-05 01:27:13.538892 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-05 01:27:13.538900 | orchestrator | Monday 05 January 2026 01:27:08 +0000 (0:00:00.342) 0:00:25.557 ********
2026-01-05 01:27:13.538907 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:27:13.538914 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:27:13.538922 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:27:13.538928 | orchestrator |
2026-01-05 01:27:13.538935 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-05 01:27:13.538942 | orchestrator | Monday 05 January 2026 01:27:08 +0000 (0:00:00.381) 0:00:25.938 ********
2026-01-05
01:27:13.538949 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:27:13.538956 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:27:13.538964 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:27:13.538970 | orchestrator | 2026-01-05 01:27:13.538977 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-05 01:27:13.538983 | orchestrator | Monday 05 January 2026 01:27:09 +0000 (0:00:00.426) 0:00:26.365 ******** 2026-01-05 01:27:13.538990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:27:13.538998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:27:13.539005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:27:13.539012 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:13.539020 | orchestrator | 2026-01-05 01:27:13.539027 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-05 01:27:13.539035 | orchestrator | Monday 05 January 2026 01:27:10 +0000 (0:00:00.639) 0:00:27.005 ******** 2026-01-05 01:27:13.539042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:27:13.539050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:27:13.539057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:27:13.539065 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:13.539072 | orchestrator | 2026-01-05 01:27:13.539080 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-05 01:27:13.539087 | orchestrator | Monday 05 January 2026 01:27:10 +0000 (0:00:00.653) 0:00:27.658 ******** 2026-01-05 01:27:13.539095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-05 01:27:13.539102 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-05 01:27:13.539110 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-05 01:27:13.539118 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:27:13.539125 | orchestrator | 2026-01-05 01:27:13.539131 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-05 01:27:13.539138 | orchestrator | Monday 05 January 2026 01:27:11 +0000 (0:00:00.816) 0:00:28.475 ******** 2026-01-05 01:27:13.539145 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:27:13.539152 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:27:13.539159 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:27:13.539167 | orchestrator | 2026-01-05 01:27:13.539174 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-05 01:27:13.539180 | orchestrator | Monday 05 January 2026 01:27:11 +0000 (0:00:00.334) 0:00:28.809 ******** 2026-01-05 01:27:13.539196 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-05 01:27:13.539204 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-05 01:27:13.539211 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-05 01:27:13.539219 | orchestrator | 2026-01-05 01:27:13.539227 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-05 01:27:13.539232 | orchestrator | Monday 05 January 2026 01:27:12 +0000 (0:00:00.530) 0:00:29.340 ******** 2026-01-05 01:27:13.539237 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:27:13.539250 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:28:55.857519 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:28:55.858694 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-05 01:28:55.858748 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-05 01:28:55.858760 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-05 01:28:55.858770 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-05 01:28:55.858780 | orchestrator | 2026-01-05 01:28:55.858792 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-05 01:28:55.858802 | orchestrator | Monday 05 January 2026 01:27:13 +0000 (0:00:01.162) 0:00:30.502 ******** 2026-01-05 01:28:55.858812 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-05 01:28:55.858822 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-05 01:28:55.858832 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-05 01:28:55.858841 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-05 01:28:55.858867 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-05 01:28:55.858877 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-05 01:28:55.858887 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-05 01:28:55.858897 | orchestrator | 2026-01-05 01:28:55.858907 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-05 01:28:55.858917 | orchestrator | Monday 05 January 2026 01:27:15 +0000 (0:00:02.052) 0:00:32.554 ******** 2026-01-05 01:28:55.858927 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:28:55.858938 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:28:55.858948 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-05 01:28:55.858957 | orchestrator | 2026-01-05 01:28:55.858967 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-05 01:28:55.858977 | orchestrator | Monday 05 January 2026 01:27:15 +0000 (0:00:00.390) 0:00:32.944 ******** 2026-01-05 01:28:55.858989 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:28:55.859002 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:28:55.859012 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:28:55.859045 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:28:55.859055 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-05 01:28:55.859065 | orchestrator | 2026-01-05 01:28:55.859151 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-05 01:28:55.859162 | orchestrator | Monday 05 January 2026 01:28:01 +0000 (0:00:45.100) 0:01:18.045 ******** 2026-01-05 01:28:55.859172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859181 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859191 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859200 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859210 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859220 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859229 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-05 01:28:55.859238 | orchestrator | 2026-01-05 01:28:55.859248 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-05 01:28:55.859258 | orchestrator | Monday 05 January 2026 01:28:25 +0000 (0:00:24.249) 0:01:42.294 ******** 2026-01-05 01:28:55.859291 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859301 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859310 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859320 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859329 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859338 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859348 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-05 01:28:55.859357 | orchestrator | 2026-01-05 01:28:55.859366 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-05 01:28:55.859376 | orchestrator | Monday 05 January 2026 01:28:37 +0000 (0:00:12.050) 0:01:54.345 ******** 2026-01-05 01:28:55.859385 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859394 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:28:55.859404 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:28:55.859420 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859430 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:28:55.859440 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:28:55.859449 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859458 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:28:55.859468 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:28:55.859477 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859487 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:28:55.859505 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:28:55.859515 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859524 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-05 01:28:55.859533 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:28:55.859544 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-05 01:28:55.859553 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-05 01:28:55.859563 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-05 01:28:55.859572 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-05 01:28:55.859582 | orchestrator | 2026-01-05 01:28:55.859591 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:28:55.859601 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-05 01:28:55.859613 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-05 01:28:55.859623 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-05 01:28:55.859633 | orchestrator | 2026-01-05 01:28:55.859643 | orchestrator | 2026-01-05 01:28:55.859652 | orchestrator | 2026-01-05 01:28:55.859662 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:28:55.859672 | orchestrator | Monday 05 January 2026 01:28:55 +0000 (0:00:18.449) 0:02:12.794 ******** 2026-01-05 01:28:55.859681 | orchestrator | =============================================================================== 2026-01-05 01:28:55.859691 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.10s 2026-01-05 01:28:55.859700 | orchestrator | generate keys ---------------------------------------------------------- 24.25s 2026-01-05 01:28:55.859709 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.45s 
2026-01-05 01:28:55.859719 | orchestrator | get keys from monitors ------------------------------------------------- 12.05s 2026-01-05 01:28:55.859728 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.30s 2026-01-05 01:28:55.859738 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.05s 2026-01-05 01:28:55.859747 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.88s 2026-01-05 01:28:55.859757 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.23s 2026-01-05 01:28:55.859766 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.16s 2026-01-05 01:28:55.859776 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.98s 2026-01-05 01:28:55.859785 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.92s 2026-01-05 01:28:55.859794 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.90s 2026-01-05 01:28:55.859804 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.88s 2026-01-05 01:28:55.859890 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6 ------ 0.82s 2026-01-05 01:28:56.435739 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.77s 2026-01-05 01:28:56.435842 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.69s 2026-01-05 01:28:56.435853 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.68s 2026-01-05 01:28:56.435858 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2026-01-05 01:28:56.435863 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.67s 2026-01-05 
01:28:56.435890 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.66s 2026-01-05 01:28:58.816929 | orchestrator | 2026-01-05 01:28:58 | INFO  | Task d0c36346-0da4-47e4-a449-367c0db5348f (copy-ceph-keys) was prepared for execution. 2026-01-05 01:28:58.817035 | orchestrator | 2026-01-05 01:28:58 | INFO  | It takes a moment until task d0c36346-0da4-47e4-a449-367c0db5348f (copy-ceph-keys) has been started and output is visible here. 2026-01-05 01:29:37.588082 | orchestrator | 2026-01-05 01:29:37.588270 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-05 01:29:37.588302 | orchestrator | 2026-01-05 01:29:37.588324 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-05 01:29:37.588339 | orchestrator | Monday 05 January 2026 01:29:03 +0000 (0:00:00.165) 0:00:00.165 ******** 2026-01-05 01:29:37.588351 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-05 01:29:37.588364 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.588375 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.588386 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:29:37.588397 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.588408 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-05 01:29:37.588424 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-05 01:29:37.588443 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-01-05 01:29:37.588467 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-05 01:29:37.588492 | orchestrator | 2026-01-05 01:29:37.588509 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-05 01:29:37.588526 | orchestrator | Monday 05 January 2026 01:29:07 +0000 (0:00:04.843) 0:00:05.008 ******** 2026-01-05 01:29:37.588544 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-05 01:29:37.588562 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.588579 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.588596 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:29:37.588612 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.588629 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-05 01:29:37.588647 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-05 01:29:37.588665 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-05 01:29:37.588683 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-05 01:29:37.588701 | orchestrator | 2026-01-05 01:29:37.588721 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-05 01:29:37.588739 | orchestrator | Monday 05 January 2026 01:29:12 +0000 (0:00:04.393) 0:00:09.402 ******** 2026-01-05 01:29:37.588778 
| orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 01:29:37.588797 | orchestrator | 2026-01-05 01:29:37.588817 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-05 01:29:37.588836 | orchestrator | Monday 05 January 2026 01:29:13 +0000 (0:00:01.004) 0:00:10.406 ******** 2026-01-05 01:29:37.588888 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-05 01:29:37.588910 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.588931 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.588951 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:29:37.588970 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.588984 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-05 01:29:37.588996 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-05 01:29:37.589009 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-05 01:29:37.589020 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-05 01:29:37.589030 | orchestrator | 2026-01-05 01:29:37.589041 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-05 01:29:37.589052 | orchestrator | Monday 05 January 2026 01:29:27 +0000 (0:00:13.701) 0:00:24.108 ******** 2026-01-05 01:29:37.589062 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-05 01:29:37.589073 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-01-05 01:29:37.589086 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-05 01:29:37.589097 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-05 01:29:37.589131 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-05 01:29:37.589152 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-05 01:29:37.589163 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-05 01:29:37.589174 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-05 01:29:37.589210 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-05 01:29:37.589221 | orchestrator | 2026-01-05 01:29:37.589232 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-05 01:29:37.589243 | orchestrator | Monday 05 January 2026 01:29:30 +0000 (0:00:03.100) 0:00:27.209 ******** 2026-01-05 01:29:37.589255 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-05 01:29:37.589266 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.589277 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.589288 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:29:37.589298 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-05 01:29:37.589309 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-05 01:29:37.589320 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-01-05 01:29:37.589330 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-05 01:29:37.589341 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-05 01:29:37.589352 | orchestrator | 2026-01-05 01:29:37.589363 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:29:37.589374 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:29:37.589395 | orchestrator | 2026-01-05 01:29:37.589406 | orchestrator | 2026-01-05 01:29:37.589416 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:29:37.589427 | orchestrator | Monday 05 January 2026 01:29:37 +0000 (0:00:07.128) 0:00:34.337 ******** 2026-01-05 01:29:37.589438 | orchestrator | =============================================================================== 2026-01-05 01:29:37.589449 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.70s 2026-01-05 01:29:37.589460 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.13s 2026-01-05 01:29:37.589471 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.84s 2026-01-05 01:29:37.589481 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.39s 2026-01-05 01:29:37.589492 | orchestrator | Check if target directories exist --------------------------------------- 3.10s 2026-01-05 01:29:37.589503 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2026-01-05 01:29:50.065745 | orchestrator | 2026-01-05 01:29:50 | INFO  | Task 4815cd77-6e6d-4f98-bf7b-c20195f3b5fe (cephclient) was prepared for execution. 
2026-01-05 01:29:50.065851 | orchestrator | 2026-01-05 01:29:50 | INFO  | It takes a moment until task 4815cd77-6e6d-4f98-bf7b-c20195f3b5fe (cephclient) has been started and output is visible here.
2026-01-05 01:30:52.461016 | orchestrator |
2026-01-05 01:30:52.461141 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-05 01:30:52.461161 | orchestrator |
2026-01-05 01:30:52.461176 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-05 01:30:52.461191 | orchestrator | Monday 05 January 2026 01:29:54 +0000 (0:00:00.241) 0:00:00.241 ********
2026-01-05 01:30:52.461204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-05 01:30:52.461219 | orchestrator |
2026-01-05 01:30:52.461231 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-05 01:30:52.461244 | orchestrator | Monday 05 January 2026 01:29:54 +0000 (0:00:00.294) 0:00:00.535 ********
2026-01-05 01:30:52.461257 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-05 01:30:52.461269 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-05 01:30:52.461283 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-05 01:30:52.461296 | orchestrator |
2026-01-05 01:30:52.461309 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-05 01:30:52.461321 | orchestrator | Monday 05 January 2026 01:29:56 +0000 (0:00:01.297) 0:00:01.832 ********
2026-01-05 01:30:52.461334 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-05 01:30:52.461346 | orchestrator |
2026-01-05 01:30:52.461359 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-05 01:30:52.461454 | orchestrator | Monday 05 January 2026 01:29:57 +0000 (0:00:01.488) 0:00:03.321 ********
2026-01-05 01:30:52.461470 | orchestrator | changed: [testbed-manager]
2026-01-05 01:30:52.461482 | orchestrator |
2026-01-05 01:30:52.461495 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-05 01:30:52.461507 | orchestrator | Monday 05 January 2026 01:29:58 +0000 (0:00:00.964) 0:00:04.285 ********
2026-01-05 01:30:52.461520 | orchestrator | changed: [testbed-manager]
2026-01-05 01:30:52.461532 | orchestrator |
2026-01-05 01:30:52.461544 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-05 01:30:52.461576 | orchestrator | Monday 05 January 2026 01:29:59 +0000 (0:00:00.958) 0:00:05.244 ********
2026-01-05 01:30:52.461590 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-05 01:30:52.461603 | orchestrator | ok: [testbed-manager]
2026-01-05 01:30:52.461616 | orchestrator |
2026-01-05 01:30:52.461628 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-05 01:30:52.461669 | orchestrator | Monday 05 January 2026 01:30:42 +0000 (0:00:42.647) 0:00:47.892 ********
2026-01-05 01:30:52.461683 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-05 01:30:52.461696 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-05 01:30:52.461709 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-05 01:30:52.461721 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-05 01:30:52.461733 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-05 01:30:52.461745 | orchestrator |
2026-01-05 01:30:52.461758 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-05 01:30:52.461771 | orchestrator | Monday 05 January 2026 01:30:46 +0000 (0:00:04.258) 0:00:52.150 ********
2026-01-05 01:30:52.461784 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-05 01:30:52.461796 | orchestrator |
2026-01-05 01:30:52.461808 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-05 01:30:52.461821 | orchestrator | Monday 05 January 2026 01:30:46 +0000 (0:00:00.493) 0:00:52.644 ********
2026-01-05 01:30:52.461833 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:30:52.461845 | orchestrator |
2026-01-05 01:30:52.461857 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-05 01:30:52.461869 | orchestrator | Monday 05 January 2026 01:30:47 +0000 (0:00:00.157) 0:00:52.802 ********
2026-01-05 01:30:52.461882 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:30:52.461895 | orchestrator |
2026-01-05 01:30:52.461907 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-05 01:30:52.461919 | orchestrator | Monday 05 January 2026 01:30:47 +0000 (0:00:00.524) 0:00:53.326 ********
2026-01-05 01:30:52.461931 | orchestrator | changed: [testbed-manager]
2026-01-05 01:30:52.461943 | orchestrator |
2026-01-05 01:30:52.461956 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-05 01:30:52.461968 | orchestrator | Monday 05 January 2026 01:30:49 +0000 (0:00:01.507) 0:00:54.833 ********
2026-01-05 01:30:52.461980 | orchestrator | changed: [testbed-manager]
2026-01-05 01:30:52.461992 | orchestrator |
2026-01-05 01:30:52.462004 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-05 01:30:52.462073 | orchestrator | Monday 05 January 2026 01:30:49 +0000 (0:00:00.746) 0:00:55.580 ********
2026-01-05 01:30:52.462088 | orchestrator | changed: [testbed-manager]
2026-01-05 01:30:52.462100 | orchestrator |
2026-01-05 01:30:52.462112 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-05 01:30:52.462125 | orchestrator | Monday 05 January 2026 01:30:50 +0000 (0:00:00.618) 0:00:56.199 ********
2026-01-05 01:30:52.462138 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-05 01:30:52.462150 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-05 01:30:52.462163 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-05 01:30:52.462175 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-05 01:30:52.462187 | orchestrator |
2026-01-05 01:30:52.462199 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:30:52.462212 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 01:30:52.462225 | orchestrator |
2026-01-05 01:30:52.462237 | orchestrator |
2026-01-05 01:30:52.462272 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:30:52.462285 | orchestrator | Monday 05 January 2026 01:30:52 +0000 (0:00:01.560) 0:00:57.760 ********
2026-01-05 01:30:52.462297 | orchestrator | ===============================================================================
2026-01-05 01:30:52.462309 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.65s
2026-01-05 01:30:52.462322 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.26s
2026-01-05 01:30:52.462335 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.56s
2026-01-05 01:30:52.462357 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.51s
2026-01-05 01:30:52.462389 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.49s
2026-01-05 01:30:52.462403 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.30s
2026-01-05 01:30:52.462416 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s
2026-01-05 01:30:52.462429 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s
2026-01-05 01:30:52.462441 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s
2026-01-05 01:30:52.462454 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s
2026-01-05 01:30:52.462466 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.52s
2026-01-05 01:30:52.462479 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s
2026-01-05 01:30:52.462491 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.29s
2026-01-05 01:30:52.462503 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s
2026-01-05 01:30:56.224206 | orchestrator | 2026-01-05 01:30:56 | INFO  | Task af6bea42-7290-4725-af52-2f06c1fc43ea (ceph-bootstrap-dashboard) was prepared for execution.
2026-01-05 01:30:56.224361 | orchestrator | 2026-01-05 01:30:56 | INFO  | It takes a moment until task af6bea42-7290-4725-af52-2f06c1fc43ea (ceph-bootstrap-dashboard) has been started and output is visible here.
2026-01-05 01:32:29.623644 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-05 01:32:29.623755 | orchestrator | 2.16.14
2026-01-05 01:32:29.623767 | orchestrator |
2026-01-05 01:32:29.623776 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-01-05 01:32:29.623785 | orchestrator |
2026-01-05 01:32:29.623792 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-01-05 01:32:29.623800 | orchestrator | Monday 05 January 2026 01:31:00 +0000 (0:00:00.275) 0:00:00.276 ********
2026-01-05 01:32:29.623807 | orchestrator | changed: [testbed-manager]
2026-01-05 01:32:29.623816 | orchestrator |
2026-01-05 01:32:29.623823 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-01-05 01:32:29.623829 | orchestrator | Monday 05 January 2026 01:31:02 +0000 (0:00:01.513) 0:00:01.789 ********
2026-01-05 01:32:29.623837 | orchestrator | changed: [testbed-manager]
2026-01-05 01:32:29.623844 | orchestrator |
2026-01-05 01:32:29.623851 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-01-05 01:32:29.623858 | orchestrator | Monday 05 January 2026 01:31:03 +0000 (0:00:01.116) 0:00:02.906 ********
2026-01-05 01:32:29.623865 | orchestrator | changed: [testbed-manager]
2026-01-05 01:32:29.623871 | orchestrator |
2026-01-05 01:32:29.623878 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-05 01:32:29.623885 | orchestrator | Monday 05 January 2026 01:31:04 +0000 (0:00:01.054) 0:00:03.961 ********
2026-01-05 01:32:29.623892 | orchestrator | changed: [testbed-manager]
2026-01-05 01:32:29.623899 | orchestrator |
2026-01-05 01:32:29.623906 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-05 01:32:29.623912 | orchestrator | Monday 05 January 2026 01:31:05 +0000 (0:00:01.207) 0:00:05.168 ********
2026-01-05 01:32:29.623919 | orchestrator | changed: [testbed-manager]
2026-01-05 01:32:29.623925 | orchestrator |
2026-01-05 01:32:29.623932 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-05 01:32:29.623938 | orchestrator | Monday 05 January 2026 01:31:06 +0000 (0:00:01.055) 0:00:06.224 ********
2026-01-05 01:32:29.623945 | orchestrator | changed: [testbed-manager]
2026-01-05 01:32:29.623952 | orchestrator |
2026-01-05 01:32:29.623959 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-05 01:32:29.623966 | orchestrator | Monday 05 January 2026 01:31:07 +0000 (0:00:01.071) 0:00:07.295 ********
2026-01-05 01:32:29.623996 | orchestrator | changed: [testbed-manager]
2026-01-05 01:32:29.624004 | orchestrator |
2026-01-05 01:32:29.624011 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-05 01:32:29.624018 | orchestrator | Monday 05 January 2026 01:31:09 +0000 (0:00:02.131) 0:00:09.427 ********
2026-01-05 01:32:29.624025 | orchestrator | changed: [testbed-manager]
2026-01-05 01:32:29.624032 | orchestrator |
2026-01-05 01:32:29.624039 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-05 01:32:29.624046 | orchestrator | Monday 05 January 2026 01:31:10 +0000 (0:00:01.189) 0:00:10.617 ********
2026-01-05 01:32:29.624053 | orchestrator | changed: [testbed-manager]
2026-01-05 01:32:29.624060 | orchestrator |
2026-01-05 01:32:29.624068 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-05 01:32:29.624074 | orchestrator | Monday 05 January 2026 01:32:04 +0000 (0:00:53.369) 0:01:03.986 ********
2026-01-05 01:32:29.624080 | orchestrator | skipping: [testbed-manager]
2026-01-05 01:32:29.624085 | orchestrator |
2026-01-05 01:32:29.624091 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-05 01:32:29.624097 | orchestrator |
2026-01-05 01:32:29.624103 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-05 01:32:29.624109 | orchestrator | Monday 05 January 2026 01:32:04 +0000 (0:00:00.211) 0:01:04.197 ********
2026-01-05 01:32:29.624116 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:32:29.624123 | orchestrator |
2026-01-05 01:32:29.624130 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-05 01:32:29.624137 | orchestrator |
2026-01-05 01:32:29.624145 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-05 01:32:29.624153 | orchestrator | Monday 05 January 2026 01:32:06 +0000 (0:00:01.890) 0:01:06.088 ********
2026-01-05 01:32:29.624161 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:32:29.624168 | orchestrator |
2026-01-05 01:32:29.624176 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-05 01:32:29.624184 | orchestrator |
2026-01-05 01:32:29.624191 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-05 01:32:29.624199 | orchestrator | Monday 05 January 2026 01:32:17 +0000 (0:00:11.364) 0:01:17.452 ********
2026-01-05 01:32:29.624206 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:32:29.624214 | orchestrator |
2026-01-05 01:32:29.624222 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:32:29.624231 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 01:32:29.624242 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:32:29.624251 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:32:29.624260 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 01:32:29.624267 | orchestrator |
2026-01-05 01:32:29.624276 | orchestrator |
2026-01-05 01:32:29.624284 | orchestrator |
2026-01-05 01:32:29.624291 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:32:29.624299 | orchestrator | Monday 05 January 2026 01:32:29 +0000 (0:00:11.390) 0:01:28.843 ********
2026-01-05 01:32:29.624306 | orchestrator | ===============================================================================
2026-01-05 01:32:29.624329 | orchestrator | Create admin user ------------------------------------------------------ 53.37s
2026-01-05 01:32:29.624356 | orchestrator | Restart ceph manager service ------------------------------------------- 24.65s
2026-01-05 01:32:29.624364 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.13s
2026-01-05 01:32:29.624371 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.51s
2026-01-05 01:32:29.624385 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.21s
2026-01-05 01:32:29.624392 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s
2026-01-05 01:32:29.624399 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.12s
2026-01-05 01:32:29.624407 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.07s
2026-01-05 01:32:29.624414 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s
2026-01-05 01:32:29.624422 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.05s
2026-01-05 01:32:29.624433 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.21s
2026-01-05 01:32:29.945816 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-01-05 01:32:32.048388 | orchestrator | 2026-01-05 01:32:32 | INFO  | Task e5c56852-61d2-486b-a7ed-510fa78e4fc8 (keystone) was prepared for execution.
2026-01-05 01:32:32.048471 | orchestrator | 2026-01-05 01:32:32 | INFO  | It takes a moment until task e5c56852-61d2-486b-a7ed-510fa78e4fc8 (keystone) has been started and output is visible here.
2026-01-05 01:32:39.578224 | orchestrator |
2026-01-05 01:32:39.578313 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:32:39.578323 | orchestrator |
2026-01-05 01:32:39.578330 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:32:39.578337 | orchestrator | Monday 05 January 2026 01:32:36 +0000 (0:00:00.288) 0:00:00.288 ********
2026-01-05 01:32:39.578345 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:32:39.578352 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:32:39.578358 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:32:39.578365 | orchestrator |
2026-01-05 01:32:39.578371 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:32:39.578378 | orchestrator | Monday 05 January 2026 01:32:36 +0000 (0:00:00.332) 0:00:00.621 ********
2026-01-05 01:32:39.578384 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-01-05 01:32:39.578391 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-01-05 01:32:39.578397 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-01-05 01:32:39.578403 | orchestrator |
2026-01-05 01:32:39.578409 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-01-05 01:32:39.578415 | orchestrator |
2026-01-05 01:32:39.578421 | orchestrator | TASK 
[keystone : include_tasks] ************************************************
2026-01-05 01:32:39.578428 | orchestrator | Monday 05 January 2026 01:32:37 +0000 (0:00:00.448) 0:00:01.070 ********
2026-01-05 01:32:39.578434 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:32:39.578442 | orchestrator |
2026-01-05 01:32:39.578448 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-01-05 01:32:39.578454 | orchestrator | Monday 05 January 2026 01:32:37 +0000 (0:00:00.595) 0:00:01.665 ********
2026-01-05 01:32:39.578465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:39.578542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:39.578569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:39.578578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:39.578587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:39.578594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:39.578650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:32:39.578658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:32:39.578665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:32:39.578671 | orchestrator |
2026-01-05 01:32:39.578678 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-01-05 01:32:39.578689 | orchestrator | Monday 05 January 2026 01:32:39 +0000 (0:00:01.833) 0:00:03.498 ********
2026-01-05 01:32:45.752436 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:32:45.752589 | orchestrator |
2026-01-05 01:32:45.752613 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-01-05 01:32:45.752674 | orchestrator | Monday 05 January 2026 01:32:39 +0000 (0:00:00.345) 0:00:03.843 ********
2026-01-05 01:32:45.752683 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:32:45.752691 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:32:45.752699 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:32:45.752707 | orchestrator |
2026-01-05 01:32:45.752716 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-01-05 01:32:45.752724 | orchestrator | Monday 05 January 2026 01:32:40 +0000 (0:00:00.321) 0:00:04.165 ********
2026-01-05 01:32:45.752732 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 01:32:45.752740 | orchestrator |
2026-01-05 01:32:45.752749 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-05 01:32:45.752762 | orchestrator | Monday 05 January 2026 01:32:41 +0000 (0:00:00.813) 0:00:04.978 ********
2026-01-05 01:32:45.752777 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:32:45.752791 | orchestrator |
2026-01-05 01:32:45.752805 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-01-05 01:32:45.752817 | orchestrator | Monday 05 January 2026 01:32:41 +0000 (0:00:00.585) 0:00:05.564 ********
2026-01-05 01:32:45.752835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:45.752970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:45.752989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:45.753020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:45.753033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:45.753050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:45.753060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:32:45.753074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:32:45.753084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:32:45.753093 | orchestrator |
2026-01-05 01:32:45.753103 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-01-05 01:32:45.753113 | orchestrator | Monday 05 January 2026 01:32:45 +0000 (0:00:03.539) 0:00:09.103 ********
2026-01-05 01:32:45.753131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:46.540061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:46.540176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:32:46.540187 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:32:46.540211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:46.540221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:46.540228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/,
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:32:46.540235 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:32:46.540258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:32:46.540272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-01-05 01:32:46.540280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:32:46.540286 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:32:46.540294 | orchestrator | 2026-01-05 01:32:46.540301 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-05 01:32:46.540314 | orchestrator | Monday 05 January 2026 01:32:45 +0000 (0:00:00.579) 0:00:09.683 ******** 2026-01-05 01:32:46.540321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:32:46.540329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:32:46.540343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:32:50.113260 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:32:50.113345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:32:50.113358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:32:50.113384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:32:50.113394 | 
orchestrator | skipping: [testbed-node-1] 2026-01-05 01:32:50.113403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 01:32:50.113435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:32:50.113461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 01:32:50.113470 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:32:50.113479 | orchestrator | 2026-01-05 01:32:50.113489 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-05 01:32:50.113500 | orchestrator | Monday 05 January 2026 01:32:46 +0000 (0:00:00.785) 0:00:10.469 ******** 2026-01-05 01:32:50.113510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:32:50.113520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:32:50.113526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:32:50.113544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:32:55.071325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 01:32:55.071426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-01-05 01:32:55.071458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:32:55.071470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:32:55.071504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 
01:32:55.071516 | orchestrator | 2026-01-05 01:32:55.071524 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-05 01:32:55.071531 | orchestrator | Monday 05 January 2026 01:32:50 +0000 (0:00:03.569) 0:00:14.038 ******** 2026-01-05 01:32:55.071554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:32:55.071562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-01-05 01:32:55.071573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:32:55.071580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:32:55.071592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 01:32:55.071604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 01:32:58.601301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:32:58.601402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:32:58.601428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 01:32:58.601437 | orchestrator | 2026-01-05 01:32:58.601446 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-05 01:32:58.601455 | orchestrator | Monday 05 January 2026 01:32:55 +0000 (0:00:04.949) 0:00:18.988 ******** 2026-01-05 01:32:58.601482 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:32:58.601490 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:32:58.601498 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:32:58.601505 | orchestrator | 
2026-01-05 01:32:58.601512 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-01-05 01:32:58.601519 | orchestrator | Monday 05 January 2026 01:32:56 +0000 (0:00:01.416) 0:00:20.405 ********
2026-01-05 01:32:58.601526 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:32:58.601532 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:32:58.601539 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:32:58.601546 | orchestrator |
2026-01-05 01:32:58.601553 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-01-05 01:32:58.601562 | orchestrator | Monday 05 January 2026 01:32:57 +0000 (0:00:00.611) 0:00:21.016 ********
2026-01-05 01:32:58.601569 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:32:58.601576 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:32:58.601583 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:32:58.601589 | orchestrator |
2026-01-05 01:32:58.601596 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-01-05 01:32:58.601603 | orchestrator | Monday 05 January 2026 01:32:57 +0000 (0:00:00.522) 0:00:21.538 ********
2026-01-05 01:32:58.601609 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:32:58.601616 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:32:58.601623 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:32:58.601630 | orchestrator |
2026-01-05 01:32:58.601637 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-01-05 01:32:58.601693 | orchestrator | Monday 05 January 2026 01:32:57 +0000 (0:00:00.395) 0:00:21.934 ********
2026-01-05 01:32:58.601720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:58.601729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:58.601743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:32:58.601757 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:32:58.601764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:32:58.601771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:32:58.601779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:32:58.601786 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:32:58.601799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:33:18.001643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:33:18.001910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:33:18.001941 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:33:18.001961 | orchestrator |
2026-01-05 01:33:18.001979 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-05 01:33:18.001998 | orchestrator | Monday 05 January 2026 01:32:58 +0000 (0:00:00.590) 0:00:22.525 ********
2026-01-05 01:33:18.002014 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:33:18.002130 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:33:18.002147 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:33:18.002178 | orchestrator |
2026-01-05 01:33:18.002196 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-01-05 01:33:18.002213 | orchestrator | Monday 05 January 2026 01:32:58 +0000 (0:00:00.303) 0:00:22.828 ********
2026-01-05 01:33:18.002230 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-05 01:33:18.002244 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-05 01:33:18.002255 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-05 01:33:18.002267 | orchestrator |
2026-01-05 01:33:18.002279 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-01-05 01:33:18.002291 | orchestrator | Monday 05 January 2026 01:33:00 +0000 (0:00:01.826) 0:00:24.655 ********
2026-01-05 01:33:18.002304 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 01:33:18.002315 | orchestrator |
2026-01-05 01:33:18.002325 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-01-05 01:33:18.002336 | orchestrator | Monday 05 January 2026 01:33:01 +0000 (0:00:00.960) 0:00:25.616 ********
2026-01-05 01:33:18.002345 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:33:18.002355 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:33:18.002364 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:33:18.002374 | orchestrator |
2026-01-05 01:33:18.002383 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-01-05 01:33:18.002393 | orchestrator | Monday 05 January 2026 01:33:02 +0000 (0:00:00.655) 0:00:26.272 ********
2026-01-05 01:33:18.002403 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 01:33:18.002412 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 01:33:18.002422 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 01:33:18.002431 | orchestrator |
2026-01-05 01:33:18.002441 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-01-05 01:33:18.002451 | orchestrator | Monday 05 January 2026 01:33:03 +0000 (0:00:01.164) 0:00:27.436 ********
2026-01-05 01:33:18.002461 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:33:18.002472 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:33:18.002481 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:33:18.002491 | orchestrator |
2026-01-05 01:33:18.002501 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-01-05 01:33:18.002510 | orchestrator | Monday 05 January 2026 01:33:04 +0000 (0:00:00.549) 0:00:27.985 ********
2026-01-05 01:33:18.002520 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-05 01:33:18.002543 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-05 01:33:18.002553 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-05 01:33:18.002562 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-05 01:33:18.002572 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-05 01:33:18.002581 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-05 01:33:18.002591 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-05 01:33:18.002601 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-05 01:33:18.002633 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-05 01:33:18.002643 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-05 01:33:18.002653 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-05 01:33:18.002662 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-05 01:33:18.002672 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-05 01:33:18.002710 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-05 01:33:18.002730 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-05 01:33:18.002740 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-05 01:33:18.002750 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-05 01:33:18.002760 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-05 01:33:18.002769 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-05 01:33:18.002779 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-05 01:33:18.002789 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-05 01:33:18.002798 | orchestrator |
2026-01-05 01:33:18.002808 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-01-05 01:33:18.002817 | orchestrator | Monday 05 January 2026 01:33:12 +0000 (0:00:08.789) 0:00:36.774 ********
2026-01-05 01:33:18.002827 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-05 01:33:18.002836 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-05 01:33:18.002846 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-05 01:33:18.002855 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-05 01:33:18.002865 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-05 01:33:18.002874 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-05 01:33:18.002884 | orchestrator |
2026-01-05 01:33:18.002894 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-01-05 01:33:18.002903 | orchestrator | Monday 05 January 2026 01:33:15 +0000 (0:00:02.695) 0:00:39.470 ********
2026-01-05 01:33:18.002917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:33:18.002945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:34:50.733665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-05 01:34:50.733772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:34:50.733783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:34:50.733811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-05 01:34:50.733818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:34:50.733841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:34:50.733855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-05 01:34:50.733862 | orchestrator |
2026-01-05 01:34:50.733870 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-05 01:34:50.733937 | orchestrator | Monday 05 January 2026 01:33:17 +0000 (0:00:02.454) 0:00:41.924 ******** 2026-01-05 01:34:50.733946 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:34:50.733953 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:34:50.733959 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:34:50.733966 | orchestrator | 2026-01-05 01:34:50.733972 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-05 01:34:50.733978 | orchestrator | Monday 05 January 2026 01:33:18 +0000 (0:00:00.524) 0:00:42.449 ******** 2026-01-05 01:34:50.733984 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:34:50.733990 | orchestrator | 2026-01-05 01:34:50.733996 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-05 01:34:50.734004 | orchestrator | Monday 05 January 2026 01:33:20 +0000 (0:00:02.458) 0:00:44.907 ******** 2026-01-05 01:34:50.734010 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:34:50.734067 | orchestrator | 2026-01-05 01:34:50.734074 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-05 01:34:50.734080 | orchestrator | Monday 05 January 2026 01:33:23 +0000 (0:00:02.432) 0:00:47.340 ******** 2026-01-05 01:34:50.734097 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:34:50.734103 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:34:50.734109 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:34:50.734117 | orchestrator | 2026-01-05 01:34:50.734121 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-05 01:34:50.734125 | orchestrator | Monday 05 January 2026 01:33:24 +0000 (0:00:00.872) 0:00:48.212 ******** 2026-01-05 01:34:50.734129 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:34:50.734133 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:34:50.734138 | orchestrator | ok: 
[testbed-node-2] 2026-01-05 01:34:50.734142 | orchestrator | 2026-01-05 01:34:50.734147 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-05 01:34:50.734152 | orchestrator | Monday 05 January 2026 01:33:24 +0000 (0:00:00.361) 0:00:48.574 ******** 2026-01-05 01:34:50.734157 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:34:50.734162 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:34:50.734168 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:34:50.734174 | orchestrator | 2026-01-05 01:34:50.734180 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-05 01:34:50.734187 | orchestrator | Monday 05 January 2026 01:33:24 +0000 (0:00:00.354) 0:00:48.929 ******** 2026-01-05 01:34:50.734193 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:34:50.734199 | orchestrator | 2026-01-05 01:34:50.734206 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-01-05 01:34:50.734212 | orchestrator | Monday 05 January 2026 01:33:41 +0000 (0:00:16.091) 0:01:05.020 ******** 2026-01-05 01:34:50.734218 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:34:50.734225 | orchestrator | 2026-01-05 01:34:50.734231 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 01:34:50.734237 | orchestrator | Monday 05 January 2026 01:33:52 +0000 (0:00:11.584) 0:01:16.604 ******** 2026-01-05 01:34:50.734244 | orchestrator | 2026-01-05 01:34:50.734250 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 01:34:50.734256 | orchestrator | Monday 05 January 2026 01:33:52 +0000 (0:00:00.071) 0:01:16.676 ******** 2026-01-05 01:34:50.734263 | orchestrator | 2026-01-05 01:34:50.734269 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 
01:34:50.734276 | orchestrator | Monday 05 January 2026 01:33:52 +0000 (0:00:00.071) 0:01:16.747 ******** 2026-01-05 01:34:50.734282 | orchestrator | 2026-01-05 01:34:50.734288 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-01-05 01:34:50.734295 | orchestrator | Monday 05 January 2026 01:33:52 +0000 (0:00:00.086) 0:01:16.833 ******** 2026-01-05 01:34:50.734301 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:34:50.734308 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:34:50.734314 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:34:50.734321 | orchestrator | 2026-01-05 01:34:50.734328 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-01-05 01:34:50.734334 | orchestrator | Monday 05 January 2026 01:34:37 +0000 (0:00:44.557) 0:02:01.391 ******** 2026-01-05 01:34:50.734341 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:34:50.734347 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:34:50.734353 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:34:50.734359 | orchestrator | 2026-01-05 01:34:50.734365 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-01-05 01:34:50.734372 | orchestrator | Monday 05 January 2026 01:34:42 +0000 (0:00:04.842) 0:02:06.233 ******** 2026-01-05 01:34:50.734378 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:34:50.734384 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:34:50.734394 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:34:50.734403 | orchestrator | 2026-01-05 01:34:50.734411 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:34:50.734421 | orchestrator | Monday 05 January 2026 01:34:50 +0000 (0:00:07.893) 0:02:14.127 ******** 2026-01-05 01:34:50.734452 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:35:47.148398 | orchestrator | 2026-01-05 01:35:47.148508 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-01-05 01:35:47.148520 | orchestrator | Monday 05 January 2026 01:34:50 +0000 (0:00:00.528) 0:02:14.656 ******** 2026-01-05 01:35:47.148528 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:35:47.148535 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:35:47.148543 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:35:47.148549 | orchestrator | 2026-01-05 01:35:47.148556 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-01-05 01:35:47.148562 | orchestrator | Monday 05 January 2026 01:34:51 +0000 (0:00:00.826) 0:02:15.482 ******** 2026-01-05 01:35:47.148569 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:35:47.148578 | orchestrator | 2026-01-05 01:35:47.148600 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-01-05 01:35:47.148608 | orchestrator | Monday 05 January 2026 01:34:53 +0000 (0:00:02.245) 0:02:17.728 ******** 2026-01-05 01:35:47.148619 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-01-05 01:35:47.148626 | orchestrator | 2026-01-05 01:35:47.148633 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-01-05 01:35:47.148640 | orchestrator | Monday 05 January 2026 01:35:06 +0000 (0:00:13.025) 0:02:30.753 ******** 2026-01-05 01:35:47.148645 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-01-05 01:35:47.148653 | orchestrator | 2026-01-05 01:35:47.148659 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-01-05 01:35:47.148667 | orchestrator | Monday 05 January 2026 01:35:34 +0000 (0:00:27.696) 0:02:58.450 ******** 2026-01-05 01:35:47.148676 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-01-05 01:35:47.148685 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-01-05 01:35:47.148692 | orchestrator | 2026-01-05 01:35:47.148698 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-01-05 01:35:47.148705 | orchestrator | Monday 05 January 2026 01:35:41 +0000 (0:00:07.282) 0:03:05.732 ******** 2026-01-05 01:35:47.148714 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:35:47.148720 | orchestrator | 2026-01-05 01:35:47.148726 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-01-05 01:35:47.148734 | orchestrator | Monday 05 January 2026 01:35:41 +0000 (0:00:00.143) 0:03:05.876 ******** 2026-01-05 01:35:47.148742 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:35:47.148750 | orchestrator | 2026-01-05 01:35:47.148756 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-01-05 01:35:47.148763 | orchestrator | Monday 05 January 2026 01:35:42 +0000 (0:00:00.188) 0:03:06.064 ******** 2026-01-05 01:35:47.148769 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:35:47.148775 | orchestrator | 2026-01-05 01:35:47.148781 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-01-05 01:35:47.148788 | orchestrator | Monday 05 January 2026 01:35:42 +0000 (0:00:00.130) 0:03:06.195 ******** 2026-01-05 01:35:47.148794 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:35:47.148800 | orchestrator | 2026-01-05 01:35:47.148807 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-01-05 01:35:47.148813 | orchestrator | Monday 05 January 2026 01:35:42 +0000 (0:00:00.362) 0:03:06.558 ******** 2026-01-05 01:35:47.148820 | orchestrator | ok: [testbed-node-0] 2026-01-05 
01:35:47.148826 | orchestrator | 2026-01-05 01:35:47.148832 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 01:35:47.148839 | orchestrator | Monday 05 January 2026 01:35:46 +0000 (0:00:03.602) 0:03:10.161 ******** 2026-01-05 01:35:47.148845 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:35:47.148851 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:35:47.148879 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:35:47.148884 | orchestrator | 2026-01-05 01:35:47.148890 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:35:47.148898 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-05 01:35:47.148910 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:35:47.148916 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 01:35:47.148922 | orchestrator | 2026-01-05 01:35:47.148928 | orchestrator | 2026-01-05 01:35:47.148933 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:35:47.148940 | orchestrator | Monday 05 January 2026 01:35:46 +0000 (0:00:00.495) 0:03:10.657 ******** 2026-01-05 01:35:47.148946 | orchestrator | =============================================================================== 2026-01-05 01:35:47.148952 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 44.56s 2026-01-05 01:35:47.148959 | orchestrator | service-ks-register : keystone | Creating services --------------------- 27.70s 2026-01-05 01:35:47.148966 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.09s 2026-01-05 01:35:47.148972 | orchestrator | keystone : Creating admin project, user, role, service, and 
endpoint --- 13.03s 2026-01-05 01:35:47.149004 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.58s 2026-01-05 01:35:47.149009 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.79s 2026-01-05 01:35:47.149017 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.89s 2026-01-05 01:35:47.149025 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.28s 2026-01-05 01:35:47.149031 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.95s 2026-01-05 01:35:47.149056 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.84s 2026-01-05 01:35:47.149063 | orchestrator | keystone : Creating default user role ----------------------------------- 3.60s 2026-01-05 01:35:47.149069 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.57s 2026-01-05 01:35:47.149075 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.54s 2026-01-05 01:35:47.149081 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.70s 2026-01-05 01:35:47.149086 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.46s 2026-01-05 01:35:47.149098 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.45s 2026-01-05 01:35:47.149104 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.43s 2026-01-05 01:35:47.149110 | orchestrator | keystone : Run key distribution ----------------------------------------- 2.25s 2026-01-05 01:35:47.149115 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.83s 2026-01-05 01:35:47.149121 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 
1.83s
2026-01-05 01:35:50.272459 | orchestrator | 2026-01-05 01:35:50 | INFO  | Task 6b9a8ab1-e91e-4800-b4d4-0af32cfe5f67 (placement) was prepared for execution.
2026-01-05 01:35:50.272556 | orchestrator | 2026-01-05 01:35:50 | INFO  | It takes a moment until task 6b9a8ab1-e91e-4800-b4d4-0af32cfe5f67 (placement) has been started and output is visible here.
2026-01-05 01:36:27.488333 | orchestrator |
2026-01-05 01:36:27.488453 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:36:27.488466 | orchestrator |
2026-01-05 01:36:27.488476 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:36:27.488486 | orchestrator | Monday 05 January 2026 01:35:54 +0000 (0:00:00.273) 0:00:00.273 ********
2026-01-05 01:36:27.488517 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:36:27.488527 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:36:27.488536 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:36:27.488544 | orchestrator |
2026-01-05 01:36:27.488552 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:36:27.488561 | orchestrator | Monday 05 January 2026 01:35:54 +0000 (0:00:00.325) 0:00:00.599 ********
2026-01-05 01:36:27.488571 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-01-05 01:36:27.488580 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-01-05 01:36:27.488588 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-01-05 01:36:27.488596 | orchestrator |
2026-01-05 01:36:27.488603 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-01-05 01:36:27.488611 | orchestrator |
2026-01-05 01:36:27.488619 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-05 01:36:27.488628 | orchestrator | Monday 05 January 2026 01:35:55 +0000 (0:00:00.455) 0:00:01.055 ********
2026-01-05 01:36:27.488637 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:36:27.488648 | orchestrator |
2026-01-05 01:36:27.488656 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-01-05 01:36:27.488665 | orchestrator | Monday 05 January 2026 01:35:55 +0000 (0:00:00.558) 0:00:01.614 ********
2026-01-05 01:36:27.488673 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-01-05 01:36:27.488682 | orchestrator |
2026-01-05 01:36:27.488690 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-01-05 01:36:27.488698 | orchestrator | Monday 05 January 2026 01:36:00 +0000 (0:00:04.162) 0:00:05.776 ********
2026-01-05 01:36:27.488707 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-01-05 01:36:27.488716 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-01-05 01:36:27.488724 | orchestrator |
2026-01-05 01:36:27.488732 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-01-05 01:36:27.488740 | orchestrator | Monday 05 January 2026 01:36:07 +0000 (0:00:07.245) 0:00:13.022 ********
2026-01-05 01:36:27.488749 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-01-05 01:36:27.488757 | orchestrator |
2026-01-05 01:36:27.488766 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-01-05 01:36:27.488774 | orchestrator | Monday 05 January 2026 01:36:11 +0000 (0:00:04.122) 0:00:17.144 ********
2026-01-05 01:36:27.488782 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 01:36:27.488789 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-01-05 01:36:27.488798 | orchestrator |
2026-01-05 01:36:27.488806 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-01-05 01:36:27.488814 | orchestrator | Monday 05 January 2026 01:36:15 +0000 (0:00:04.245) 0:00:21.390 ********
2026-01-05 01:36:27.488823 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 01:36:27.488831 | orchestrator |
2026-01-05 01:36:27.488839 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-01-05 01:36:27.488847 | orchestrator | Monday 05 January 2026 01:36:19 +0000 (0:00:03.433) 0:00:24.823 ********
2026-01-05 01:36:27.488855 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-01-05 01:36:27.488862 | orchestrator |
2026-01-05 01:36:27.488871 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-05 01:36:27.488880 | orchestrator | Monday 05 January 2026 01:36:23 +0000 (0:00:04.017) 0:00:28.841 ********
2026-01-05 01:36:27.488889 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:36:27.488898 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:36:27.488907 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:36:27.488916 | orchestrator |
2026-01-05 01:36:27.488924 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-01-05 01:36:27.488941 | orchestrator | Monday 05 January 2026 01:36:23 +0000 (0:00:00.333) 0:00:29.174 ********
2026-01-05 01:36:27.488969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:27.489003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:27.489013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:27.489022 | orchestrator | 2026-01-05 01:36:27.489031 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-05 01:36:27.489095 | orchestrator | Monday 05 January 2026 01:36:24 +0000 (0:00:01.157) 0:00:30.332 ******** 2026-01-05 01:36:27.489104 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:36:27.489113 | orchestrator | 2026-01-05 01:36:27.489122 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-05 01:36:27.489130 | orchestrator | Monday 05 January 2026 01:36:24 +0000 (0:00:00.336) 0:00:30.668 ******** 2026-01-05 01:36:27.489139 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:36:27.489147 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:36:27.489155 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:36:27.489163 | orchestrator | 2026-01-05 01:36:27.489172 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-05 01:36:27.489180 | orchestrator | Monday 05 January 2026 01:36:25 +0000 (0:00:00.301) 0:00:30.970 ******** 2026-01-05 01:36:27.489189 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:36:27.489212 | orchestrator | 2026-01-05 01:36:27.489220 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-05 01:36:27.489228 | orchestrator | Monday 05 January 2026 01:36:25 +0000 (0:00:00.557) 0:00:31.527 ******** 2026-01-05 
01:36:27.489244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:27.489264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:30.375897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:30.375995 | orchestrator | 2026-01-05 01:36:30.376006 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-05 01:36:30.376013 | orchestrator | Monday 05 January 2026 01:36:27 +0000 (0:00:01.664) 0:00:33.192 ******** 2026-01-05 01:36:30.376021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:36:30.376098 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:36:30.376112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:36:30.376119 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:36:30.376139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:36:30.376147 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:36:30.376153 | orchestrator | 2026-01-05 01:36:30.376160 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-05 01:36:30.376180 | orchestrator | Monday 05 January 2026 01:36:27 +0000 (0:00:00.511) 0:00:33.704 ******** 2026-01-05 01:36:30.376187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:36:30.376194 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:36:30.376201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:36:30.376213 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:36:30.376220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:36:30.376226 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:36:30.376232 | orchestrator | 2026-01-05 01:36:30.376239 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-05 01:36:30.376249 | orchestrator | Monday 05 January 2026 01:36:28 +0000 (0:00:00.713) 0:00:34.417 ******** 2026-01-05 01:36:30.376256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:30.376268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:37.709247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:37.709378 | orchestrator | 2026-01-05 01:36:37.709397 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-05 01:36:37.709410 | orchestrator | Monday 05 January 2026 01:36:30 +0000 (0:00:01.667) 0:00:36.085 ******** 2026-01-05 01:36:37.709422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:37.709448 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:37.709460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:36:37.709472 | orchestrator | 2026-01-05 01:36:37.709484 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
*************** 2026-01-05 01:36:37.709495 | orchestrator | Monday 05 January 2026 01:36:32 +0000 (0:00:02.330) 0:00:38.415 ******** 2026-01-05 01:36:37.709524 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-05 01:36:37.709537 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-05 01:36:37.709555 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-05 01:36:37.709566 | orchestrator | 2026-01-05 01:36:37.709576 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-05 01:36:37.709587 | orchestrator | Monday 05 January 2026 01:36:34 +0000 (0:00:01.544) 0:00:39.959 ******** 2026-01-05 01:36:37.709598 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:36:37.709610 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:36:37.709621 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:36:37.709631 | orchestrator | 2026-01-05 01:36:37.709642 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-05 01:36:37.709653 | orchestrator | Monday 05 January 2026 01:36:35 +0000 (0:00:01.541) 0:00:41.501 ******** 2026-01-05 01:36:37.709665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:36:37.709677 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:36:37.709696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:36:37.709710 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:36:37.709723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 01:36:37.709736 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:36:37.709748 | orchestrator | 2026-01-05 01:36:37.709759 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-01-05 01:36:37.709777 | orchestrator | Monday 05 January 2026 01:36:36 +0000 (0:00:00.757) 0:00:42.258 ******** 2026-01-05 01:36:37.709798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:37:02.489633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:37:02.489787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 01:37:02.489814 | orchestrator | 2026-01-05 01:37:02.489833 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-01-05 01:37:02.489850 | orchestrator | Monday 05 January 2026 01:36:37 +0000 (0:00:01.162) 0:00:43.420 ******** 2026-01-05 01:37:02.489865 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:37:02.489881 | orchestrator | 2026-01-05 01:37:02.489897 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] ***
2026-01-05 01:37:02.489913 | orchestrator | Monday 05 January 2026 01:36:39 +0000 (0:00:02.234) 0:00:45.655 ********
2026-01-05 01:37:02.489929 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:37:02.489943 | orchestrator |
2026-01-05 01:37:02.489958 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-01-05 01:37:02.489972 | orchestrator | Monday 05 January 2026 01:36:42 +0000 (0:00:02.446) 0:00:48.101 ********
2026-01-05 01:37:02.489988 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:37:02.490002 | orchestrator |
2026-01-05 01:37:02.490063 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-05 01:37:02.490085 | orchestrator | Monday 05 January 2026 01:36:56 +0000 (0:00:14.585) 0:01:02.687 ********
2026-01-05 01:37:02.490166 | orchestrator |
2026-01-05 01:37:02.490183 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-05 01:37:02.490200 | orchestrator | Monday 05 January 2026 01:36:57 +0000 (0:00:00.074) 0:01:02.761 ********
2026-01-05 01:37:02.490216 | orchestrator |
2026-01-05 01:37:02.490239 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-05 01:37:02.490265 | orchestrator | Monday 05 January 2026 01:36:57 +0000 (0:00:00.069) 0:01:02.831 ********
2026-01-05 01:37:02.490292 | orchestrator |
2026-01-05 01:37:02.490319 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-01-05 01:37:02.490334 | orchestrator | Monday 05 January 2026 01:36:57 +0000 (0:00:00.068) 0:01:02.900 ********
2026-01-05 01:37:02.490349 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:37:02.490365 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:37:02.490386 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:37:02.490401 | orchestrator |
2026-01-05 01:37:02.490422 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:37:02.490444 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 01:37:02.490460 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-05 01:37:02.490478 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-05 01:37:02.490494 | orchestrator |
2026-01-05 01:37:02.490512 | orchestrator |
2026-01-05 01:37:02.490527 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:37:02.490543 | orchestrator | Monday 05 January 2026 01:37:02 +0000 (0:00:04.897) 0:01:07.797 ********
2026-01-05 01:37:02.490557 | orchestrator | ===============================================================================
2026-01-05 01:37:02.490572 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.59s
2026-01-05 01:37:02.490612 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.25s
2026-01-05 01:37:02.490628 | orchestrator | placement : Restart placement-api container ----------------------------- 4.90s
2026-01-05 01:37:02.490643 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.25s
2026-01-05 01:37:02.490658 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.16s
2026-01-05 01:37:02.490672 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.12s
2026-01-05 01:37:02.490686 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.02s
2026-01-05 01:37:02.490700 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.43s
2026-01-05 01:37:02.490715 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.45s
2026-01-05 01:37:02.490729 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.33s
2026-01-05 01:37:02.490743 | orchestrator | placement : Creating placement databases -------------------------------- 2.23s
2026-01-05 01:37:02.490759 | orchestrator | placement : Copying over config.json files for services ----------------- 1.67s
2026-01-05 01:37:02.490774 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.66s
2026-01-05 01:37:02.490789 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.54s
2026-01-05 01:37:02.490803 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.54s
2026-01-05 01:37:02.490814 | orchestrator | placement : Check placement containers ---------------------------------- 1.16s
2026-01-05 01:37:02.490823 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.16s
2026-01-05 01:37:02.490832 | orchestrator | placement : Copying over existing policy file --------------------------- 0.76s
2026-01-05 01:37:02.490840 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s
2026-01-05 01:37:02.490861 | orchestrator | placement : include_tasks ----------------------------------------------- 0.56s
2026-01-05 01:37:04.868246 | orchestrator | 2026-01-05 01:37:04 | INFO  | Task efd62b0f-e37d-4bcd-b009-b9042ed52cd5 (neutron) was prepared for execution.
2026-01-05 01:37:04.868300 | orchestrator | 2026-01-05 01:37:04 | INFO  | It takes a moment until task efd62b0f-e37d-4bcd-b009-b9042ed52cd5 (neutron) has been started and output is visible here.
2026-01-05 01:37:55.429951 | orchestrator |
2026-01-05 01:37:55.430211 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:37:55.430232 | orchestrator |
2026-01-05 01:37:55.430242 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:37:55.430252 | orchestrator | Monday 05 January 2026 01:37:09 +0000 (0:00:00.264) 0:00:00.264 ********
2026-01-05 01:37:55.430261 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:37:55.430271 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:37:55.430280 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:37:55.430288 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:37:55.430297 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:37:55.430305 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:37:55.430314 | orchestrator |
2026-01-05 01:37:55.430323 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:37:55.430332 | orchestrator | Monday 05 January 2026 01:37:09 +0000 (0:00:00.722) 0:00:00.987 ********
2026-01-05 01:37:55.430342 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-01-05 01:37:55.430350 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-01-05 01:37:55.430359 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-01-05 01:37:55.430368 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-01-05 01:37:55.430376 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-01-05 01:37:55.430385 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-01-05 01:37:55.430394 | orchestrator |
2026-01-05 01:37:55.430402 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-01-05 01:37:55.430411 | orchestrator |
2026-01-05 01:37:55.430420 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-05 01:37:55.430428 | orchestrator | Monday 05 January 2026 01:37:10 +0000 (0:00:00.636) 0:00:01.623 ********
2026-01-05 01:37:55.430438 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:37:55.430448 | orchestrator |
2026-01-05 01:37:55.430457 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-01-05 01:37:55.430466 | orchestrator | Monday 05 January 2026 01:37:11 +0000 (0:00:01.271) 0:00:02.895 ********
2026-01-05 01:37:55.430476 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:37:55.430486 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:37:55.430496 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:37:55.430507 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:37:55.430517 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:37:55.430526 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:37:55.430537 | orchestrator |
2026-01-05 01:37:55.430548 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-01-05 01:37:55.430558 | orchestrator | Monday 05 January 2026 01:37:13 +0000 (0:00:01.368) 0:00:04.263 ********
2026-01-05 01:37:55.430568 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:37:55.430578 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:37:55.430588 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:37:55.430598 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:37:55.430609 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:37:55.430618 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:37:55.430628 | orchestrator |
2026-01-05 01:37:55.430639 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-01-05 01:37:55.430650 | orchestrator | Monday 05 January 2026 01:37:14 +0000 (0:00:01.169) 0:00:05.433 ********
2026-01-05 01:37:55.430684 | orchestrator | ok: [testbed-node-0] => {
2026-01-05 01:37:55.430696 | orchestrator |  "changed": false,
2026-01-05 01:37:55.430706 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:37:55.430716 | orchestrator | }
2026-01-05 01:37:55.430726 | orchestrator | ok: [testbed-node-1] => {
2026-01-05 01:37:55.430736 | orchestrator |  "changed": false,
2026-01-05 01:37:55.430746 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:37:55.430756 | orchestrator | }
2026-01-05 01:37:55.430766 | orchestrator | ok: [testbed-node-2] => {
2026-01-05 01:37:55.430776 | orchestrator |  "changed": false,
2026-01-05 01:37:55.430787 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:37:55.430797 | orchestrator | }
2026-01-05 01:37:55.430807 | orchestrator | ok: [testbed-node-3] => {
2026-01-05 01:37:55.430818 | orchestrator |  "changed": false,
2026-01-05 01:37:55.430828 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:37:55.430837 | orchestrator | }
2026-01-05 01:37:55.430846 | orchestrator | ok: [testbed-node-4] => {
2026-01-05 01:37:55.430855 | orchestrator |  "changed": false,
2026-01-05 01:37:55.430863 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:37:55.430872 | orchestrator | }
2026-01-05 01:37:55.430881 | orchestrator | ok: [testbed-node-5] => {
2026-01-05 01:37:55.430889 | orchestrator |  "changed": false,
2026-01-05 01:37:55.430898 | orchestrator |  "msg": "All assertions passed"
2026-01-05 01:37:55.430907 | orchestrator | }
2026-01-05 01:37:55.430915 | orchestrator |
2026-01-05 01:37:55.430924 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-01-05 01:37:55.430933 | orchestrator | Monday 05 January 2026 01:37:15 +0000 (0:00:00.811) 0:00:06.245 ********
2026-01-05 01:37:55.430941 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:37:55.430950 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:37:55.430958 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:37:55.430967 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:37:55.430976 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:37:55.430989 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:37:55.431003 | orchestrator |
2026-01-05 01:37:55.431018 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-01-05 01:37:55.431033 | orchestrator | Monday 05 January 2026 01:37:15 +0000 (0:00:00.631) 0:00:06.876 ********
2026-01-05 01:37:55.431047 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-01-05 01:37:55.431061 | orchestrator |
2026-01-05 01:37:55.431075 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-01-05 01:37:55.431089 | orchestrator | Monday 05 January 2026 01:37:20 +0000 (0:00:04.309) 0:00:11.186 ********
2026-01-05 01:37:55.431102 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-01-05 01:37:55.431159 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-01-05 01:37:55.431175 | orchestrator |
2026-01-05 01:37:55.431212 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-01-05 01:37:55.431227 | orchestrator | Monday 05 January 2026 01:37:27 +0000 (0:00:06.964) 0:00:18.150 ********
2026-01-05 01:37:55.431241 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 01:37:55.431255 | orchestrator |
2026-01-05 01:37:55.431269 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-01-05 01:37:55.431282 | orchestrator | Monday 05 January 2026 01:37:30 +0000 (0:00:03.429) 0:00:21.579 ********
2026-01-05 01:37:55.431296 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 01:37:55.431310 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-01-05 01:37:55.431326 | orchestrator |
2026-01-05 01:37:55.431341 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-01-05 01:37:55.431358 | orchestrator | Monday 05 January 2026 01:37:34 +0000 (0:00:04.178) 0:00:25.758 ********
2026-01-05 01:37:55.431386 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 01:37:55.431400 | orchestrator |
2026-01-05 01:37:55.431410 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-01-05 01:37:55.431419 | orchestrator | Monday 05 January 2026 01:37:38 +0000 (0:00:03.413) 0:00:29.171 ********
2026-01-05 01:37:55.431428 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-01-05 01:37:55.431436 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-01-05 01:37:55.431445 | orchestrator |
2026-01-05 01:37:55.431453 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-05 01:37:55.431462 | orchestrator | Monday 05 January 2026 01:37:46 +0000 (0:00:08.304) 0:00:37.475 ********
2026-01-05 01:37:55.431470 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:37:55.431479 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:37:55.431488 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:37:55.431496 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:37:55.431505 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:37:55.431513 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:37:55.431522 | orchestrator |
2026-01-05 01:37:55.431530 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-01-05 01:37:55.431539 | orchestrator | Monday 05 January 2026 01:37:47 +0000 (0:00:00.829) 0:00:38.305 ********
2026-01-05 01:37:55.431548 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:37:55.431556 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:37:55.431565 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:37:55.431573 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:37:55.431582 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:37:55.431590 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:37:55.431599 | orchestrator |
2026-01-05 01:37:55.431607 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-01-05 01:37:55.431636 | orchestrator | Monday 05 January 2026 01:37:49 +0000 (0:00:02.462) 0:00:40.767 ********
2026-01-05 01:37:55.431645 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:37:55.431654 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:37:55.431663 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:37:55.431671 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:37:55.431680 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:37:55.431689 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:37:55.431697 | orchestrator |
2026-01-05 01:37:55.431706 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-05 01:37:55.431714 | orchestrator | Monday 05 January 2026 01:37:50 +0000 (0:00:00.971) 0:00:41.739 ********
2026-01-05 01:37:55.431723 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:37:55.431731 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:37:55.431740 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:37:55.431748 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:37:55.431757 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:37:55.431765 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:37:55.431774 | orchestrator |
2026-01-05 01:37:55.431782 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-01-05 01:37:55.431791 | orchestrator | Monday 05 January 2026 01:37:53 +0000 (0:00:02.482)
0:00:44.221 ******** 2026-01-05 01:37:55.431804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:37:55.431838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:00.747311 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:00.747406 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:00.747426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:00.747433 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:00.747463 | orchestrator | 2026-01-05 01:38:00.747470 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-01-05 01:38:00.747475 | orchestrator | Monday 05 January 2026 01:37:55 +0000 (0:00:02.297) 0:00:46.519 ******** 2026-01-05 01:38:00.747479 | orchestrator | [WARNING]: Skipped 2026-01-05 01:38:00.747485 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-01-05 01:38:00.747490 | orchestrator | due to this access issue: 2026-01-05 01:38:00.747495 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-01-05 01:38:00.747499 | orchestrator | a directory 2026-01-05 01:38:00.747514 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:38:00.747518 | orchestrator | 2026-01-05 01:38:00.747522 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-05 01:38:00.747526 | orchestrator | Monday 05 January 2026 01:37:56 +0000 (0:00:00.821) 0:00:47.340 ******** 2026-01-05 01:38:00.747531 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:38:00.747536 | orchestrator | 2026-01-05 01:38:00.747540 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-01-05 01:38:00.747557 | orchestrator | Monday 05 January 2026 01:37:57 +0000 (0:00:01.291) 0:00:48.632 ******** 2026-01-05 01:38:00.747561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:00.747566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:00.747570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:00.747578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:00.747589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:05.495710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:05.495797 | orchestrator | 2026-01-05 01:38:05.495806 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-05 01:38:05.495813 | orchestrator | Monday 05 January 2026 01:38:00 +0000 (0:00:03.202) 0:00:51.835 ******** 2026-01-05 01:38:05.495820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:05.495828 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:05.495857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:05.495863 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:05.495880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:05.495886 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:05.495905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:05.495911 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:05.495917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:05.495923 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:05.495928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:05.495938 | orchestrator | skipping: [testbed-node-5] 
2026-01-05 01:38:05.495944 | orchestrator | 2026-01-05 01:38:05.495950 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-05 01:38:05.495956 | orchestrator | Monday 05 January 2026 01:38:02 +0000 (0:00:02.010) 0:00:53.845 ******** 2026-01-05 01:38:05.495961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:05.495967 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:05.495981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:10.842265 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:10.842358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:10.842370 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:10.842379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:10.842405 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:10.842411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:10.842418 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:10.842424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:10.842430 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:10.842436 | orchestrator | 2026-01-05 
01:38:10.842455 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-05 01:38:10.842462 | orchestrator | Monday 05 January 2026 01:38:05 +0000 (0:00:02.740) 0:00:56.585 ******** 2026-01-05 01:38:10.842469 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:10.842475 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:10.842481 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:10.842487 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:10.842493 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:10.842499 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:10.842509 | orchestrator | 2026-01-05 01:38:10.842519 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-05 01:38:10.842529 | orchestrator | Monday 05 January 2026 01:38:07 +0000 (0:00:02.435) 0:00:59.021 ******** 2026-01-05 01:38:10.842539 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:10.842548 | orchestrator | 2026-01-05 01:38:10.842557 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-05 01:38:10.842582 | orchestrator | Monday 05 January 2026 01:38:08 +0000 (0:00:00.158) 0:00:59.179 ******** 2026-01-05 01:38:10.842592 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:10.842601 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:10.842610 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:10.842618 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:10.842628 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:10.842638 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:10.842647 | orchestrator | 2026-01-05 01:38:10.842657 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-05 01:38:10.842666 | orchestrator | Monday 05 January 2026 01:38:08 +0000 (0:00:00.625) 
0:00:59.805 ******** 2026-01-05 01:38:10.842687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:10.842699 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:10.842710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 
01:38:10.842720 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:10.842729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:10.842740 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:10.842755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:10.842765 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:10.842785 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:19.191518 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:19.191659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:19.191679 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:19.191690 | orchestrator | 2026-01-05 01:38:19.191701 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-05 01:38:19.191711 | orchestrator | Monday 05 January 2026 01:38:10 +0000 (0:00:02.117) 0:01:01.923 ******** 2026-01-05 01:38:19.191723 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:19.191754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:19.191765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:19.191820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:19.191833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:19.191843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:19.191853 | orchestrator | 2026-01-05 01:38:19.191864 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-05 01:38:19.191874 | orchestrator | Monday 05 January 2026 01:38:13 +0000 (0:00:03.168) 0:01:05.091 ******** 2026-01-05 01:38:19.191890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:19.191902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:19.191973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:24.091941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:24.092028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 
01:38:24.092049 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-05 01:38:24.092075 | orchestrator | 2026-01-05 01:38:24.092085 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-05 01:38:24.092094 | orchestrator | Monday 05 January 2026 01:38:19 +0000 (0:00:05.189) 0:01:10.281 ******** 2026-01-05 01:38:24.092102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-01-05 01:38:24.092112 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:24.092138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:24.092147 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:24.092242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:24.092253 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:24.092266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:24.092278 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:24.092283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:24.092287 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:24.092292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:24.092297 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:24.092301 | orchestrator | 2026-01-05 01:38:24.092306 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-05 01:38:24.092311 | orchestrator | Monday 05 January 2026 01:38:21 +0000 (0:00:02.157) 0:01:12.438 ******** 2026-01-05 01:38:24.092316 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:24.092321 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:24.092328 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:24.092336 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:38:24.092342 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:38:24.092347 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:38:24.092355 | orchestrator | 2026-01-05 01:38:24.092361 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-05 01:38:24.092371 | orchestrator | Monday 05 January 2026 01:38:24 +0000 (0:00:02.741) 0:01:15.179 ******** 2026-01-05 01:38:43.519995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:43.520102 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:43.520117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:43.520150 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:43.520175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:43.520233 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:43.520244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:43.520273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:43.520283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 01:38:43.520293 | orchestrator | 2026-01-05 01:38:43.520310 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-01-05 01:38:43.520321 | orchestrator | Monday 05 January 2026 01:38:27 +0000 (0:00:03.467) 0:01:18.647 ******** 2026-01-05 01:38:43.520330 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:43.520338 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:43.520347 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:43.520356 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:43.520364 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:43.520373 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:43.520381 | orchestrator | 2026-01-05 01:38:43.520390 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-01-05 01:38:43.520399 | orchestrator | Monday 05 January 2026 01:38:29 +0000 (0:00:02.163) 0:01:20.810 ******** 2026-01-05 01:38:43.520407 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:43.520416 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:43.520425 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:43.520433 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:43.520442 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:43.520450 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:43.520459 | orchestrator | 2026-01-05 01:38:43.520468 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-05 01:38:43.520481 | orchestrator | Monday 05 January 2026 01:38:32 +0000 (0:00:02.547) 0:01:23.358 ******** 2026-01-05 01:38:43.520490 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:43.520499 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:43.520508 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:43.520519 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:43.520529 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:43.520538 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:43.520548 | orchestrator | 2026-01-05 01:38:43.520559 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-05 01:38:43.520570 | orchestrator | Monday 05 January 2026 01:38:34 +0000 (0:00:02.064) 0:01:25.422 ******** 2026-01-05 01:38:43.520580 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:43.520590 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:43.520600 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:43.520610 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:43.520620 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:43.520630 | orchestrator | 
skipping: [testbed-node-5] 2026-01-05 01:38:43.520640 | orchestrator | 2026-01-05 01:38:43.520651 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-05 01:38:43.520662 | orchestrator | Monday 05 January 2026 01:38:36 +0000 (0:00:02.656) 0:01:28.079 ******** 2026-01-05 01:38:43.520672 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:43.520682 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:43.520693 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:43.520702 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:43.520713 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:43.520723 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:43.520733 | orchestrator | 2026-01-05 01:38:43.520744 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-05 01:38:43.520754 | orchestrator | Monday 05 January 2026 01:38:38 +0000 (0:00:01.997) 0:01:30.076 ******** 2026-01-05 01:38:43.520764 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:43.520774 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:43.520784 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:43.520794 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:43.520804 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:43.520814 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:43.520825 | orchestrator | 2026-01-05 01:38:43.520835 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-01-05 01:38:43.520845 | orchestrator | Monday 05 January 2026 01:38:41 +0000 (0:00:02.331) 0:01:32.407 ******** 2026-01-05 01:38:43.520862 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-05 01:38:43.520873 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:43.520883 | orchestrator | skipping: 
[testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-05 01:38:43.520978 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:38:43.520990 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-05 01:38:43.520999 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:43.521046 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-05 01:38:43.521057 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:43.521074 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-05 01:38:47.885072 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:47.885171 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-05 01:38:47.885217 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:47.885229 | orchestrator | 2026-01-05 01:38:47.885239 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-05 01:38:47.885249 | orchestrator | Monday 05 January 2026 01:38:43 +0000 (0:00:02.195) 0:01:34.602 ******** 2026-01-05 01:38:47.885261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:47.885275 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:47.885302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:47.885311 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:47.885321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:47.885352 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:38:47.885362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:47.885372 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:38:47.885397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:47.885407 | orchestrator | 
skipping: [testbed-node-3] 2026-01-05 01:38:47.885416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:38:47.885425 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:38:47.885436 | orchestrator | 2026-01-05 01:38:47.885452 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-05 01:38:47.885466 | orchestrator | Monday 05 January 2026 01:38:45 +0000 (0:00:02.332) 0:01:36.935 ******** 2026-01-05 01:38:47.885488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:47.885513 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:38:47.885528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:38:47.885544 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:38:47.885570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 01:39:13.260707 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.260850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:39:13.260872 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:39:13.260904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:39:13.260917 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:39:13.260929 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 01:39:13.260963 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:39:13.260975 | orchestrator | 2026-01-05 01:39:13.260988 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-01-05 01:39:13.261000 | orchestrator | Monday 05 January 2026 01:38:47 +0000 (0:00:02.039) 0:01:38.975 ******** 2026-01-05 01:39:13.261011 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:39:13.261022 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:39:13.261032 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:39:13.261043 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.261054 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:39:13.261070 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:39:13.261093 | orchestrator | 2026-01-05 01:39:13.261123 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-01-05 01:39:13.261141 | orchestrator | Monday 05 January 2026 01:38:50 +0000 (0:00:02.297) 0:01:41.272 ******** 2026-01-05 01:39:13.261158 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:39:13.261176 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
01:39:13.261192 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.261257 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:39:13.261280 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:39:13.261299 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:39:13.261317 | orchestrator | 2026-01-05 01:39:13.261337 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-01-05 01:39:13.261355 | orchestrator | Monday 05 January 2026 01:38:53 +0000 (0:00:03.500) 0:01:44.773 ******** 2026-01-05 01:39:13.261382 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:39:13.261403 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:39:13.261419 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.261434 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:39:13.261451 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:39:13.261467 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:39:13.261484 | orchestrator | 2026-01-05 01:39:13.261500 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-01-05 01:39:13.261517 | orchestrator | Monday 05 January 2026 01:38:55 +0000 (0:00:02.327) 0:01:47.100 ******** 2026-01-05 01:39:13.261534 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:39:13.261550 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.261567 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:39:13.261586 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:39:13.261601 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:39:13.261617 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:39:13.261633 | orchestrator | 2026-01-05 01:39:13.261651 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-01-05 01:39:13.261696 | orchestrator | Monday 05 January 2026 01:38:58 +0000 (0:00:02.146) 0:01:49.247 ******** 2026-01-05 
01:39:13.261715 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:39:13.261732 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:39:13.261750 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.261766 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:39:13.261783 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:39:13.261800 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:39:13.261820 | orchestrator | 2026-01-05 01:39:13.261839 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-01-05 01:39:13.261858 | orchestrator | Monday 05 January 2026 01:39:00 +0000 (0:00:02.184) 0:01:51.431 ******** 2026-01-05 01:39:13.261894 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:39:13.261906 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:39:13.261916 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:39:13.261927 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.261938 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:39:13.261948 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:39:13.261959 | orchestrator | 2026-01-05 01:39:13.261970 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-01-05 01:39:13.261981 | orchestrator | Monday 05 January 2026 01:39:02 +0000 (0:00:02.008) 0:01:53.440 ******** 2026-01-05 01:39:13.261992 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:39:13.262002 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.262013 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:39:13.262107 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:39:13.262118 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:39:13.262128 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:39:13.262139 | orchestrator | 2026-01-05 01:39:13.262150 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-01-05 01:39:13.262161 | orchestrator | Monday 05 January 2026 01:39:04 +0000 (0:00:02.185) 0:01:55.626 ******** 2026-01-05 01:39:13.262172 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:39:13.262189 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:39:13.262276 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:39:13.262297 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.262313 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:39:13.262342 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:39:13.262361 | orchestrator | 2026-01-05 01:39:13.262380 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-01-05 01:39:13.262398 | orchestrator | Monday 05 January 2026 01:39:06 +0000 (0:00:02.097) 0:01:57.724 ******** 2026-01-05 01:39:13.262416 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:39:13.262433 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:39:13.262451 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:39:13.262470 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:39:13.262734 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:39:13.262871 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:39:13.262899 | orchestrator | 2026-01-05 01:39:13.262919 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-01-05 01:39:13.262938 | orchestrator | Monday 05 January 2026 01:39:08 +0000 (0:00:02.260) 0:01:59.984 ******** 2026-01-05 01:39:13.262957 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-05 01:39:13.262978 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:39:13.262997 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-05 01:39:13.263012 | orchestrator | skipping: [testbed-node-2] 
2026-01-05 01:39:13.263023 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:39:13.263034 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:39:13.263046 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:39:13.263056 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:39:13.263067 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:39:13.263078 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:39:13.263089 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-05 01:39:13.263100 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:39:13.263110 | orchestrator |
2026-01-05 01:39:13.263122 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-01-05 01:39:13.263132 | orchestrator | Monday 05 January 2026 01:39:11 +0000 (0:00:02.190) 0:02:02.175 ********
2026-01-05 01:39:13.263145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:39:13.263304 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:39:13.263343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:39:16.042120 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:39:16.042291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:39:16.042309 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:39:16.042320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:39:16.042329 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:39:16.042337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:39:16.042365 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:39:16.042375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:39:16.042381 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:39:16.042386 | orchestrator |
2026-01-05 01:39:16.042392 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-01-05 01:39:16.042398 | orchestrator | Monday 05 January 2026 01:39:13 +0000 (0:00:02.168) 0:02:04.344 ********
2026-01-05 01:39:16.042419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:39:16.042430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:39:16.042435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-05 01:39:16.042445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:39:16.042451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:39:16.042461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-05 01:41:25.762934 | orchestrator |
2026-01-05 01:41:25.763049 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-05 01:41:25.763065 | orchestrator | Monday 05 January 2026 01:39:16 +0000 (0:00:02.787) 0:02:07.132 ********
2026-01-05 01:41:25.763074 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:41:25.763083 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:41:25.763106 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:41:25.763114 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:41:25.763121 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:41:25.763129 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:41:25.763136 | orchestrator |
2026-01-05 01:41:25.763144 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-01-05 01:41:25.763152 | orchestrator | Monday 05 January 2026 01:39:16 +0000 (0:00:00.837) 0:02:07.969 ********
2026-01-05 01:41:25.763160 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:41:25.763167 | orchestrator |
2026-01-05 01:41:25.763174 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-01-05 01:41:25.763182 | orchestrator | Monday 05 January 2026 01:39:19 +0000 (0:00:02.261) 0:02:10.231 ********
2026-01-05 01:41:25.763189 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:41:25.763217 | orchestrator |
2026-01-05 01:41:25.763225 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-01-05 01:41:25.763233 | orchestrator | Monday 05 January 2026 01:39:21 +0000 (0:00:02.325) 0:02:12.556 ********
2026-01-05 01:41:25.763240 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:41:25.763247 | orchestrator |
2026-01-05 01:41:25.763254 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:41:25.763262 | orchestrator | Monday 05 January 2026 01:40:07 +0000 (0:00:45.908) 0:02:58.465 ********
2026-01-05 01:41:25.763269 | orchestrator |
2026-01-05 01:41:25.763276 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:41:25.763283 | orchestrator | Monday 05 January 2026 01:40:07 +0000 (0:00:00.075) 0:02:58.541 ********
2026-01-05 01:41:25.763290 | orchestrator |
2026-01-05 01:41:25.763297 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:41:25.763304 | orchestrator | Monday 05 January 2026 01:40:07 +0000 (0:00:00.077) 0:02:58.618 ********
2026-01-05 01:41:25.763311 | orchestrator |
2026-01-05 01:41:25.763319 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:41:25.763326 | orchestrator | Monday 05 January 2026 01:40:07 +0000 (0:00:00.072) 0:02:58.690 ********
2026-01-05 01:41:25.763333 | orchestrator |
2026-01-05 01:41:25.763366 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:41:25.763374 | orchestrator | Monday 05 January 2026 01:40:07 +0000 (0:00:00.237) 0:02:58.928 ********
2026-01-05 01:41:25.763382 | orchestrator |
2026-01-05 01:41:25.763389 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-05 01:41:25.763396 | orchestrator | Monday 05 January 2026 01:40:07 +0000 (0:00:00.071) 0:02:58.999 ********
2026-01-05 01:41:25.763406 | orchestrator |
2026-01-05 01:41:25.763414 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-01-05 01:41:25.763423 | orchestrator | Monday 05 January 2026 01:40:07 +0000 (0:00:00.071) 0:02:59.070 ********
2026-01-05 01:41:25.763431 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:41:25.763440 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:41:25.763448 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:41:25.763457 | orchestrator |
2026-01-05 01:41:25.763466 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-01-05 01:41:25.763474 | orchestrator | Monday 05 January 2026 01:40:30 +0000 (0:00:22.576) 0:03:21.647 ********
2026-01-05 01:41:25.763483 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:41:25.763491 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:41:25.763499 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:41:25.763508 | orchestrator |
2026-01-05 01:41:25.763517 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:41:25.763528 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-05 01:41:25.763539 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-05 01:41:25.763547 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-05 01:41:25.763556 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-05 01:41:25.763567 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-05 01:41:25.763580 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-05 01:41:25.763593 | orchestrator |
2026-01-05 01:41:25.763605 | orchestrator |
2026-01-05 01:41:25.763618 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:41:25.763638 | orchestrator | Monday 05 January 2026 01:41:25 +0000 (0:00:54.721) 0:04:16.369 ********
2026-01-05 01:41:25.763650 | orchestrator | ===============================================================================
2026-01-05 01:41:25.763663 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 54.72s
2026-01-05 01:41:25.763675 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.91s
2026-01-05 01:41:25.763688 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.58s
2026-01-05 01:41:25.763720 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.30s
2026-01-05 01:41:25.763734 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.96s
2026-01-05 01:41:25.763747 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.19s
2026-01-05 01:41:25.763765 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.31s
2026-01-05 01:41:25.763778 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.18s
2026-01-05 01:41:25.763790 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.50s
2026-01-05 01:41:25.763803 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.47s
2026-01-05 01:41:25.763816 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.43s
2026-01-05 01:41:25.763828 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.41s
2026-01-05 01:41:25.763839 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.20s
2026-01-05 01:41:25.763850 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.17s
2026-01-05 01:41:25.763863 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.79s
2026-01-05 01:41:25.763875 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.74s
2026-01-05 01:41:25.763887 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.74s
2026-01-05 01:41:25.763899 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 2.66s
2026-01-05 01:41:25.763911 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 2.55s
2026-01-05 01:41:25.763921 | orchestrator | Setting sysctl values --------------------------------------------------- 2.48s
2026-01-05 01:41:28.119918 | orchestrator | 2026-01-05 01:41:28 | INFO  | Task 7149b9c6-e387-4995-9d13-f69478bb7e3f (nova) was prepared for execution.
2026-01-05 01:41:28.119992 | orchestrator | 2026-01-05 01:41:28 | INFO  | It takes a moment until task 7149b9c6-e387-4995-9d13-f69478bb7e3f (nova) has been started and output is visible here.
2026-01-05 01:43:34.586575 | orchestrator |
2026-01-05 01:43:34.586693 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:43:34.586708 | orchestrator |
2026-01-05 01:43:34.586717 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-01-05 01:43:34.586725 | orchestrator | Monday 05 January 2026 01:41:32 +0000 (0:00:00.279) 0:00:00.279 ********
2026-01-05 01:43:34.586735 | orchestrator | changed: [testbed-manager]
2026-01-05 01:43:34.586756 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.586765 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:43:34.586773 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:43:34.586780 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:43:34.586789 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:43:34.586797 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:43:34.586805 | orchestrator |
2026-01-05 01:43:34.586815 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:43:34.586823 | orchestrator | Monday 05 January 2026 01:41:33 +0000 (0:00:00.870) 0:00:01.150 ********
2026-01-05 01:43:34.586832 | orchestrator | changed: [testbed-manager]
2026-01-05 01:43:34.586841 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.586888 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:43:34.586898 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:43:34.586907 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:43:34.586916 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:43:34.586924 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:43:34.586933 | orchestrator |
2026-01-05 01:43:34.586942 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:43:34.586951 | orchestrator | Monday 05 January 2026 01:41:34 +0000 (0:00:00.859) 0:00:02.010 ********
2026-01-05 01:43:34.586960 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-01-05 01:43:34.586969 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-05 01:43:34.586978 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-05 01:43:34.586987 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-05 01:43:34.586995 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-05 01:43:34.587004 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-05 01:43:34.587012 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-05 01:43:34.587021 | orchestrator |
2026-01-05 01:43:34.587030 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-05 01:43:34.587039 | orchestrator |
2026-01-05 01:43:34.587047 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-05 01:43:34.587055 | orchestrator | Monday 05 January 2026 01:41:34 +0000 (0:00:00.720) 0:00:02.731 ********
2026-01-05 01:43:34.587063 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:43:34.587071 | orchestrator |
2026-01-05 01:43:34.587079 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-05 01:43:34.587087 | orchestrator | Monday 05 January 2026 01:41:35 +0000 (0:00:00.795) 0:00:03.527 ********
2026-01-05 01:43:34.587096 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-05 01:43:34.587104 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-05 01:43:34.587113 | orchestrator |
2026-01-05 01:43:34.587122 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-05 01:43:34.587130 | orchestrator | Monday 05 January 2026 01:41:40 +0000 (0:00:04.507) 0:00:08.035 ********
2026-01-05 01:43:34.587139 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:43:34.587148 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 01:43:34.587156 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.587165 | orchestrator |
2026-01-05 01:43:34.587173 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-05 01:43:34.587182 | orchestrator | Monday 05 January 2026 01:41:44 +0000 (0:00:04.461) 0:00:12.496 ********
2026-01-05 01:43:34.587191 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.587199 | orchestrator |
2026-01-05 01:43:34.587208 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-05 01:43:34.587217 | orchestrator | Monday 05 January 2026 01:41:45 +0000 (0:00:00.663) 0:00:13.160 ********
2026-01-05 01:43:34.587225 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.587234 | orchestrator |
2026-01-05 01:43:34.587242 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-05 01:43:34.587292 | orchestrator | Monday 05 January 2026 01:41:46 +0000 (0:00:01.370) 0:00:14.530 ********
2026-01-05 01:43:34.587302 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.587310 | orchestrator |
2026-01-05 01:43:34.587320 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-05 01:43:34.587329 | orchestrator | Monday 05 January 2026 01:41:49 +0000 (0:00:02.821) 0:00:17.351 ********
2026-01-05 01:43:34.587337 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:43:34.587345 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.587354 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.587362 | orchestrator |
2026-01-05 01:43:34.587370 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-05 01:43:34.587388 | orchestrator | Monday 05 January 2026 01:41:49 +0000 (0:00:00.292) 0:00:17.644 ********
2026-01-05 01:43:34.587397 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:43:34.587406 | orchestrator |
2026-01-05 01:43:34.587415 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-05 01:43:34.587423 | orchestrator | Monday 05 January 2026 01:42:23 +0000 (0:00:34.135) 0:00:51.779 ********
2026-01-05 01:43:34.587432 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.587440 | orchestrator |
2026-01-05 01:43:34.587449 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-05 01:43:34.587458 | orchestrator | Monday 05 January 2026 01:42:40 +0000 (0:00:16.289) 0:01:08.069 ********
2026-01-05 01:43:34.587466 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:43:34.587531 | orchestrator |
2026-01-05 01:43:34.587540 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-05 01:43:34.587549 | orchestrator | Monday 05 January 2026 01:42:53 +0000 (0:00:12.854) 0:01:20.924 ********
2026-01-05 01:43:34.587581 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:43:34.587591 | orchestrator |
2026-01-05 01:43:34.587600 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-05 01:43:34.587609 | orchestrator | Monday 05 January 2026 01:42:53 +0000 (0:00:00.725) 0:01:21.649 ********
2026-01-05 01:43:34.587617 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:43:34.587626 | orchestrator |
2026-01-05 01:43:34.587634 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-05 01:43:34.587643 | orchestrator | Monday 05 January 2026 01:42:54 +0000 (0:00:00.475) 0:01:22.124 ********
2026-01-05 01:43:34.587652 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:43:34.587661 | orchestrator |
2026-01-05 01:43:34.587669 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-05 01:43:34.587678 | orchestrator | Monday 05 January 2026 01:42:55 +0000 (0:00:00.755) 0:01:22.881 ********
2026-01-05 01:43:34.587687 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:43:34.587696 | orchestrator |
2026-01-05 01:43:34.587704 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-05 01:43:34.587713 | orchestrator | Monday 05 January 2026 01:43:14 +0000 (0:00:19.455) 0:01:42.336 ********
2026-01-05 01:43:34.587721 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:43:34.587729 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.587738 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.587746 | orchestrator |
2026-01-05 01:43:34.587755 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-05 01:43:34.587763 | orchestrator |
2026-01-05 01:43:34.587772 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-05 01:43:34.587781 | orchestrator | Monday 05 January 2026 01:43:14 +0000 (0:00:00.324) 0:01:42.661 ********
2026-01-05 01:43:34.587789 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:43:34.587797 | orchestrator |
2026-01-05 01:43:34.587806 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-05 01:43:34.587814 | orchestrator | Monday 05 January 2026 01:43:15 +0000 (0:00:00.824) 0:01:43.485 ********
2026-01-05 01:43:34.587822 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.587831 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.587839 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.587848 | orchestrator |
2026-01-05 01:43:34.587857 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-05 01:43:34.587866 | orchestrator | Monday 05 January 2026 01:43:17 +0000 (0:00:02.247) 0:01:45.732 ********
2026-01-05 01:43:34.587874 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.587882 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.587891 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.587899 | orchestrator |
2026-01-05 01:43:34.587908 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-05 01:43:34.587922 | orchestrator | Monday 05 January 2026 01:43:20 +0000 (0:00:02.431) 0:01:48.164 ********
2026-01-05 01:43:34.587931 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:43:34.587940 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.587949 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.587958 | orchestrator |
2026-01-05 01:43:34.587966 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-05 01:43:34.587974 | orchestrator | Monday 05 January 2026 01:43:20 +0000 (0:00:00.523) 0:01:48.688 ********
2026-01-05 01:43:34.587983 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 01:43:34.587991 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.588000 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 01:43:34.588008 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.588017 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-05 01:43:34.588026 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-05 01:43:34.588035 | orchestrator |
2026-01-05 01:43:34.588044 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-05 01:43:34.588057 | orchestrator | Monday 05 January 2026 01:43:28 +0000 (0:00:08.108) 0:01:56.796 ********
2026-01-05 01:43:34.588066 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:43:34.588074 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.588082 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.588091 | orchestrator |
2026-01-05 01:43:34.588099 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-05 01:43:34.588108 | orchestrator | Monday 05 January 2026 01:43:29 +0000 (0:00:00.352) 0:01:57.149 ********
2026-01-05 01:43:34.588116 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-05 01:43:34.588124 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:43:34.588133 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-05 01:43:34.588141 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.588149 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-05 01:43:34.588158 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.588166 | orchestrator |
2026-01-05 01:43:34.588175 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-05 01:43:34.588183 | orchestrator | Monday 05 January 2026 01:43:30 +0000 (0:00:01.166) 0:01:58.316 ********
2026-01-05 01:43:34.588192 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.588200 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.588210 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.588218 | orchestrator |
2026-01-05 01:43:34.588226 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-05 01:43:34.588235 | orchestrator | Monday 05 January 2026 01:43:31 +0000 (0:00:00.524) 0:01:58.840 ********
2026-01-05 01:43:34.588243 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.588252 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.588260 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:43:34.588269 | orchestrator |
2026-01-05 01:43:34.588277 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-05 01:43:34.588287 | orchestrator | Monday 05 January 2026 01:43:32 +0000 (0:00:01.024) 0:01:59.865 ********
2026-01-05 01:43:34.588296 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:43:34.588304 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:43:34.588318 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:44:58.618524 | orchestrator |
2026-01-05 01:44:58.618710 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-05 01:44:58.618726 | orchestrator | Monday 05 January 2026 01:43:34 +0000 (0:00:02.505) 0:02:02.371 ********
2026-01-05 01:44:58.618736 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:44:58.618747 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:44:58.618756 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:44:58.618766 | orchestrator |
2026-01-05 01:44:58.618798 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-05 01:44:58.618807 | orchestrator | Monday 05 January 2026 01:43:57 +0000 (0:00:22.827) 0:02:25.198 ********
2026-01-05 01:44:58.618817 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:44:58.618825 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:44:58.618834 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:44:58.618843 | orchestrator |
2026-01-05 01:44:58.618851 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-05 01:44:58.618860 | orchestrator | Monday 05 January 2026 01:44:10 +0000 (0:00:13.491) 0:02:38.690 ********
2026-01-05 01:44:58.618869 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:44:58.618878 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:44:58.618887 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:44:58.618895 | orchestrator |
2026-01-05 01:44:58.618904 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-01-05 01:44:58.618913 | orchestrator | Monday 05 January 2026 01:44:11 +0000 (0:00:01.087) 0:02:39.777 ********
2026-01-05 01:44:58.618921 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:44:58.618930 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:44:58.618939 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:44:58.618948 | orchestrator |
2026-01-05 01:44:58.618956 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-01-05 01:44:58.618971 | orchestrator | Monday 05 January 2026 01:44:25 +0000 (0:00:13.142) 0:02:52.920 ********
2026-01-05 01:44:58.618987 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:44:58.619002 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:44:58.619017 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:44:58.619032 | orchestrator |
2026-01-05 01:44:58.619046 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-05 01:44:58.619060 | orchestrator | Monday 05 January 2026 01:44:26 +0000 (0:00:01.140) 0:02:54.061 ********
2026-01-05 01:44:58.619075 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:44:58.619090 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:44:58.619105 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:44:58.619120 | orchestrator |
2026-01-05 01:44:58.619135 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-01-05 01:44:58.619151 | orchestrator |
2026-01-05 01:44:58.619168 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-05 01:44:58.619184 | orchestrator | Monday 05 January 2026 01:44:26 +0000 (0:00:00.331) 0:02:54.393 ********
2026-01-05 01:44:58.619199 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:44:58.619217 | orchestrator |
2026-01-05 01:44:58.619233 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-01-05 01:44:58.619248 | orchestrator | Monday 05 January 2026 01:44:27 +0000 (0:00:00.783) 0:02:55.176 ********
2026-01-05 01:44:58.619270 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-01-05 01:44:58.619287 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-01-05 01:44:58.619302 | orchestrator |
2026-01-05 01:44:58.619316 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-01-05 01:44:58.619330 | orchestrator | Monday 05 January 2026 01:44:31 +0000 (0:00:03.632) 0:02:58.808 ********
2026-01-05 01:44:58.619344 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-01-05 01:44:58.619379 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-01-05 01:44:58.619394 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-01-05 01:44:58.619409 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-01-05 01:44:58.619424 | orchestrator |
2026-01-05 01:44:58.619438 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-01-05 01:44:58.619471 | orchestrator | Monday 05 January 2026 01:44:38 +0000 (0:00:07.224) 0:03:06.033 ********
2026-01-05 01:44:58.619487 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 01:44:58.619501 | orchestrator |
2026-01-05 01:44:58.619515 | orchestrator | TASK [service-ks-register : nova | Creating users]
***************************** 2026-01-05 01:44:58.619531 | orchestrator | Monday 05 January 2026 01:44:41 +0000 (0:00:03.543) 0:03:09.576 ******** 2026-01-05 01:44:58.619546 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 01:44:58.619594 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-01-05 01:44:58.619610 | orchestrator | 2026-01-05 01:44:58.619625 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-01-05 01:44:58.619639 | orchestrator | Monday 05 January 2026 01:44:45 +0000 (0:00:04.001) 0:03:13.578 ******** 2026-01-05 01:44:58.619655 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-05 01:44:58.619669 | orchestrator | 2026-01-05 01:44:58.619683 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-01-05 01:44:58.619698 | orchestrator | Monday 05 January 2026 01:44:49 +0000 (0:00:03.461) 0:03:17.039 ******** 2026-01-05 01:44:58.619713 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-01-05 01:44:58.619722 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-01-05 01:44:58.619731 | orchestrator | 2026-01-05 01:44:58.619740 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-01-05 01:44:58.619770 | orchestrator | Monday 05 January 2026 01:44:57 +0000 (0:00:07.922) 0:03:24.962 ******** 2026-01-05 01:44:58.619786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:44:58.619800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:44:58.619828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:44:58.619847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-01-05 01:45:03.426683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:03.426755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:03.426762 | orchestrator | 2026-01-05 01:45:03.426769 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-01-05 01:45:03.426775 | orchestrator | Monday 05 January 2026 01:44:58 +0000 (0:00:01.437) 0:03:26.399 ******** 2026-01-05 01:45:03.426780 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:03.426785 | orchestrator | 2026-01-05 01:45:03.426790 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-05 01:45:03.426795 | orchestrator | Monday 05 January 2026 01:44:58 +0000 (0:00:00.153) 0:03:26.553 ******** 2026-01-05 01:45:03.426800 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:03.426804 | 
orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:03.426809 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:03.426840 | orchestrator | 2026-01-05 01:45:03.426848 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-05 01:45:03.426855 | orchestrator | Monday 05 January 2026 01:44:59 +0000 (0:00:00.321) 0:03:26.874 ******** 2026-01-05 01:45:03.426863 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:45:03.426871 | orchestrator | 2026-01-05 01:45:03.426878 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-05 01:45:03.426885 | orchestrator | Monday 05 January 2026 01:44:59 +0000 (0:00:00.727) 0:03:27.602 ******** 2026-01-05 01:45:03.426892 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:03.426900 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:03.426907 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:03.426914 | orchestrator | 2026-01-05 01:45:03.426921 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-05 01:45:03.426929 | orchestrator | Monday 05 January 2026 01:45:00 +0000 (0:00:00.570) 0:03:28.173 ******** 2026-01-05 01:45:03.426950 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:45:03.426957 | orchestrator | 2026-01-05 01:45:03.426962 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-05 01:45:03.426966 | orchestrator | Monday 05 January 2026 01:45:00 +0000 (0:00:00.612) 0:03:28.786 ******** 2026-01-05 01:45:03.426973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:03.426993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:03.426999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:03.427012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:03.427017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:03.427022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:03.427027 | orchestrator | 2026-01-05 01:45:03.427035 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-05 01:45:05.152263 | orchestrator | Monday 05 January 2026 01:45:03 +0000 (0:00:02.419) 0:03:31.206 ******** 2026-01-05 01:45:05.152349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:45:05.152379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:45:05.152386 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:05.152592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:45:05.152600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:45:05.152604 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:05.152621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:45:05.152632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:45:05.152636 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:05.152640 | orchestrator | 2026-01-05 01:45:05.152645 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-05 01:45:05.152649 | orchestrator | Monday 05 January 2026 01:45:04 +0000 (0:00:00.903) 0:03:32.110 
******** 2026-01-05 01:45:05.152656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:45:05.152660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:45:05.152664 | orchestrator | skipping: [testbed-node-0] 
2026-01-05 01:45:05.152673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:45:07.749411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:45:07.749498 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
01:45:07.749527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:45:07.749537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:45:07.749547 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
01:45:07.749600 | orchestrator | 2026-01-05 01:45:07.749622 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-05 01:45:07.749636 | orchestrator | Monday 05 January 2026 01:45:05 +0000 (0:00:00.831) 0:03:32.942 ******** 2026-01-05 01:45:07.749648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:07.749703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:07.749724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:07.749738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:07.749751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:07.749778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-01-05 01:45:14.518365 | orchestrator | 2026-01-05 01:45:14.518463 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-05 01:45:14.518481 | orchestrator | Monday 05 January 2026 01:45:07 +0000 (0:00:02.590) 0:03:35.532 ******** 2026-01-05 01:45:14.518500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:14.518538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:14.518556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:14.518632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:14.518648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:14.518667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:14.518679 | orchestrator | 2026-01-05 01:45:14.518692 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-05 01:45:14.518703 | orchestrator | Monday 05 January 2026 01:45:13 +0000 (0:00:06.158) 0:03:41.690 ******** 2026-01-05 01:45:14.518715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:45:14.518736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:45:14.518749 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:14.518772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:45:18.905878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:45:18.905964 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:18.905997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 01:45:18.906066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:45:18.906080 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:18.906091 | orchestrator | 2026-01-05 01:45:18.906101 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-05 01:45:18.906113 | orchestrator | Monday 05 January 2026 01:45:14 +0000 (0:00:00.617) 0:03:42.308 ******** 2026-01-05 01:45:18.906123 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:45:18.906133 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:45:18.906141 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:45:18.906149 | orchestrator | 2026-01-05 01:45:18.906157 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-01-05 01:45:18.906166 | orchestrator | Monday 05 January 2026 01:45:16 +0000 (0:00:01.540) 0:03:43.849 ******** 2026-01-05 01:45:18.906174 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:18.906182 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:18.906190 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:18.906199 | orchestrator | 2026-01-05 01:45:18.906216 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-01-05 01:45:18.906225 | orchestrator | Monday 05 January 2026 01:45:16 +0000 (0:00:00.322) 0:03:44.172 ******** 2026-01-05 01:45:18.906254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:18.906272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:18.906290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 01:45:18.906301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:18.906312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:18.906330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:56.753079 | orchestrator | 2026-01-05 01:45:56.753165 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-05 01:45:56.753188 | orchestrator | Monday 05 January 2026 01:45:18 +0000 (0:00:02.084) 0:03:46.256 ******** 2026-01-05 01:45:56.753195 | orchestrator | 2026-01-05 01:45:56.753201 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-05 01:45:56.753207 | orchestrator | Monday 05 January 2026 01:45:18 +0000 (0:00:00.145) 0:03:46.402 ******** 2026-01-05 
01:45:56.753213 | orchestrator | 2026-01-05 01:45:56.753219 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-05 01:45:56.753242 | orchestrator | Monday 05 January 2026 01:45:18 +0000 (0:00:00.141) 0:03:46.543 ******** 2026-01-05 01:45:56.753248 | orchestrator | 2026-01-05 01:45:56.753254 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-05 01:45:56.753259 | orchestrator | Monday 05 January 2026 01:45:18 +0000 (0:00:00.142) 0:03:46.686 ******** 2026-01-05 01:45:56.753265 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:45:56.753273 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:45:56.753279 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:45:56.753284 | orchestrator | 2026-01-05 01:45:56.753290 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-05 01:45:56.753296 | orchestrator | Monday 05 January 2026 01:45:35 +0000 (0:00:16.764) 0:04:03.451 ******** 2026-01-05 01:45:56.753302 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:45:56.753308 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:45:56.753313 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:45:56.753319 | orchestrator | 2026-01-05 01:45:56.753325 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-05 01:45:56.753330 | orchestrator | 2026-01-05 01:45:56.753336 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-05 01:45:56.753342 | orchestrator | Monday 05 January 2026 01:45:43 +0000 (0:00:08.323) 0:04:11.775 ******** 2026-01-05 01:45:56.753349 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:45:56.753356 | orchestrator | 2026-01-05 01:45:56.753362 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-05 01:45:56.753368 | orchestrator | Monday 05 January 2026 01:45:45 +0000 (0:00:01.296) 0:04:13.072 ******** 2026-01-05 01:45:56.753374 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:45:56.753379 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:45:56.753385 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:45:56.753391 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:56.753396 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:56.753402 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:56.753408 | orchestrator | 2026-01-05 01:45:56.753413 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-01-05 01:45:56.753419 | orchestrator | Monday 05 January 2026 01:45:46 +0000 (0:00:00.828) 0:04:13.900 ******** 2026-01-05 01:45:56.753425 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:56.753431 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:56.753436 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:56.753442 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 01:45:56.753449 | orchestrator | 2026-01-05 01:45:56.753454 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-05 01:45:56.753460 | orchestrator | Monday 05 January 2026 01:45:46 +0000 (0:00:00.889) 0:04:14.790 ******** 2026-01-05 01:45:56.753467 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-05 01:45:56.753473 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-05 01:45:56.753479 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-05 01:45:56.753484 | orchestrator | 2026-01-05 01:45:56.753490 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-05 
01:45:56.753496 | orchestrator | Monday 05 January 2026 01:45:47 +0000 (0:00:00.861) 0:04:15.652 ******** 2026-01-05 01:45:56.753502 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-05 01:45:56.753508 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-05 01:45:56.753514 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-05 01:45:56.753519 | orchestrator | 2026-01-05 01:45:56.753525 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-05 01:45:56.753531 | orchestrator | Monday 05 January 2026 01:45:49 +0000 (0:00:01.279) 0:04:16.931 ******** 2026-01-05 01:45:56.753537 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-01-05 01:45:56.753547 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:45:56.753553 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-05 01:45:56.753559 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:45:56.753564 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-01-05 01:45:56.753570 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:45:56.753576 | orchestrator | 2026-01-05 01:45:56.753582 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-05 01:45:56.753587 | orchestrator | Monday 05 January 2026 01:45:49 +0000 (0:00:00.563) 0:04:17.495 ******** 2026-01-05 01:45:56.753593 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-05 01:45:56.753599 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-05 01:45:56.753659 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 01:45:56.753672 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 01:45:56.753681 | orchestrator | skipping: [testbed-node-0] 
2026-01-05 01:45:56.753692 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 01:45:56.753700 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 01:45:56.753710 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:56.753737 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-05 01:45:56.753748 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-05 01:45:56.753763 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:56.753772 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-05 01:45:56.753781 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-05 01:45:56.753790 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-05 01:45:56.753800 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-05 01:45:56.753810 | orchestrator | 2026-01-05 01:45:56.753819 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-05 01:45:56.753829 | orchestrator | Monday 05 January 2026 01:45:51 +0000 (0:00:02.061) 0:04:19.556 ******** 2026-01-05 01:45:56.753839 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:56.753848 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:56.753858 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:56.753866 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:45:56.753872 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:45:56.753877 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:45:56.753883 | orchestrator | 2026-01-05 01:45:56.753889 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-05 01:45:56.753895 | orchestrator | 
Monday 05 January 2026 01:45:52 +0000 (0:00:01.190) 0:04:20.746 ******** 2026-01-05 01:45:56.753900 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:45:56.753906 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:45:56.753912 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:45:56.753917 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:45:56.753923 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:45:56.753929 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:45:56.753935 | orchestrator | 2026-01-05 01:45:56.753940 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-05 01:45:56.753946 | orchestrator | Monday 05 January 2026 01:45:54 +0000 (0:00:01.800) 0:04:22.547 ******** 2026-01-05 01:45:56.753954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:45:56.753970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:45:56.753982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:45:58.648940 | orchestrator | 2026-01-05 01:45:58.648949 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-05 01:45:58.648955 | orchestrator | Monday 05 
January 2026 01:45:57 +0000 (0:00:02.425) 0:04:24.973 ******** 2026-01-05 01:45:58.648962 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:45:58.648968 | orchestrator | 2026-01-05 01:45:58.648974 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-05 01:45:58.648983 | orchestrator | Monday 05 January 2026 01:45:58 +0000 (0:00:01.462) 0:04:26.436 ******** 2026-01-05 01:46:01.929689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:46:01.929843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:46:01.929891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:46:01.929906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 
01:46:01.929919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:46:01.929959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:46:01.929973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:46:01.930000 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:46:01.930069 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:46:01.930101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:01.930121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:01.930143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:01.930188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:03.580204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:03.580309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:03.580325 | orchestrator | 2026-01-05 01:46:03.580338 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-05 01:46:03.580351 | orchestrator | Monday 05 January 2026 01:46:02 +0000 (0:00:03.554) 0:04:29.990 ******** 2026-01-05 01:46:03.580364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:46:03.580377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:46:03.580407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:46:03.580441 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:46:03.580464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:46:03.580484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:46:03.580492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:46:03.580499 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:46:03.580506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:46:03.580516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:46:03.580534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:46:05.299457 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:46:05.299569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:46:05.299592 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:46:05.299608 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:46:05.299688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:46:05.299702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:46:05.299715 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
01:46:05.299748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:46:05.299789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:46:05.299802 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:46:05.299815 | orchestrator | 2026-01-05 01:46:05.299829 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-05 01:46:05.299844 | orchestrator | Monday 05 January 2026 01:46:03 +0000 (0:00:01.769) 0:04:31.760 ******** 2026-01-05 01:46:05.299881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:46:05.299898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:46:05.299914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:46:05.299927 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:46:05.299946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:46:05.299970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:46:05.299996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:46:13.392136 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:46:13.392265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:46:13.392291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:46:13.392304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:46:13.392338 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:46:13.392365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:46:13.392377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:46:13.392388 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:46:13.392415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:46:13.392426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:46:13.392436 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:46:13.392446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-05 01:46:13.392456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-05 01:46:13.392474 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:46:13.392484 | orchestrator |
2026-01-05 01:46:13.392495 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-05 01:46:13.392506 | orchestrator | Monday 05 January 2026 01:46:06 +0000 (0:00:02.085) 0:04:33.845 ********
2026-01-05 01:46:13.392516 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:46:13.392526 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:46:13.392536 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:46:13.392546 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 01:46:13.392556 | orchestrator |
2026-01-05 01:46:13.392566 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-01-05 01:46:13.392575 | orchestrator | Monday 05 January 2026 01:46:06 +0000 (0:00:00.905) 0:04:34.751 ********
2026-01-05 01:46:13.392585 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 01:46:13.392599 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 01:46:13.392610 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 01:46:13.392619 | orchestrator |
2026-01-05 01:46:13.392663 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-01-05 01:46:13.392675 | orchestrator | Monday 05 January 2026 01:46:08 +0000 (0:00:01.256) 0:04:36.007 ********
2026-01-05 01:46:13.392686 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 01:46:13.392698 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 01:46:13.392709 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 01:46:13.392721 | orchestrator |
2026-01-05 01:46:13.392732 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-01-05 01:46:13.392743 | orchestrator | Monday 05 January 2026 01:46:09 +0000 (0:00:01.152) 0:04:37.159 ********
2026-01-05 01:46:13.392755 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:46:13.392767 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:46:13.392778 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:46:13.392790 | orchestrator |
2026-01-05 01:46:13.392801 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-01-05 01:46:13.392812 | orchestrator | Monday 05 January 2026 01:46:09 +0000 (0:00:00.550) 0:04:37.710 ********
2026-01-05 01:46:13.392824 | orchestrator | ok: [testbed-node-3]
2026-01-05 01:46:13.392835 | orchestrator | ok: [testbed-node-4]
2026-01-05 01:46:13.392847 | orchestrator | ok: [testbed-node-5]
2026-01-05 01:46:13.392858 | orchestrator |
2026-01-05 01:46:13.392869 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
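The two "Extract ... key from file" tasks above pull the base64 key for a ceph client out of its keyring file. As a rough illustration only (the role does this with its own Ansible lookup, not this helper), a keyring in its INI form can be parsed like this; the keyring text, key value, and function name are all made up:

```python
import configparser


def extract_ceph_key(keyring_text: str, entity: str) -> str:
    """Return the base64 key for an entity (e.g. 'client.nova') from a
    ceph keyring given in INI form. Illustrative sketch, not the
    kolla-ansible implementation."""
    parser = configparser.ConfigParser()
    parser.read_string(keyring_text)
    return parser[entity]["key"].strip()


# Hypothetical keyring content; the key is a placeholder, not a real secret.
sample = "[client.nova]\nkey = AQBOnlyAnIllustrativeKey==\n"
print(extract_ceph_key(sample, "client.nova"))
```

The extracted key is what later gets injected into libvirt as a secret value on the compute nodes.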
2026-01-05 01:46:13.392881 | orchestrator | Monday 05 January 2026 01:46:10 +0000 (0:00:00.758) 0:04:38.468 ********
2026-01-05 01:46:13.392893 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-05 01:46:13.392905 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-05 01:46:13.392916 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-05 01:46:13.392925 | orchestrator |
2026-01-05 01:46:13.392935 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-01-05 01:46:13.392944 | orchestrator | Monday 05 January 2026 01:46:12 +0000 (0:00:01.492) 0:04:39.960 ********
2026-01-05 01:46:13.392961 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-05 01:46:32.366160 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-05 01:46:32.366258 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-05 01:46:32.366272 | orchestrator |
2026-01-05 01:46:32.366281 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-01-05 01:46:32.366290 | orchestrator | Monday 05 January 2026 01:46:13 +0000 (0:00:01.223) 0:04:41.183 ********
2026-01-05 01:46:32.366297 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-05 01:46:32.366304 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-05 01:46:32.366335 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-05 01:46:32.366342 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-01-05 01:46:32.366348 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-01-05 01:46:32.366355 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-01-05 01:46:32.366361 | orchestrator |
2026-01-05 01:46:32.366368 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-01-05 01:46:32.366374 | orchestrator | Monday 05 January 2026 01:46:17 +0000 (0:00:03.924) 0:04:45.108 ********
2026-01-05 01:46:32.366381 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:46:32.366388 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:46:32.366395 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:46:32.366401 | orchestrator |
2026-01-05 01:46:32.366407 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-01-05 01:46:32.366414 | orchestrator | Monday 05 January 2026 01:46:17 +0000 (0:00:00.532) 0:04:45.640 ********
2026-01-05 01:46:32.366420 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:46:32.366426 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:46:32.366433 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:46:32.366439 | orchestrator |
2026-01-05 01:46:32.366445 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-01-05 01:46:32.366452 | orchestrator | Monday 05 January 2026 01:46:18 +0000 (0:00:00.341) 0:04:45.981 ********
2026-01-05 01:46:32.366458 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:46:32.366464 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:46:32.366470 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:46:32.366477 | orchestrator |
2026-01-05 01:46:32.366483 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-01-05 01:46:32.366489 | orchestrator | Monday 05 January 2026 01:46:19 +0000 (0:00:01.300) 0:04:47.282 ********
2026-01-05 01:46:32.366497 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-05 01:46:32.366506 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-05 01:46:32.366512 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-05 01:46:32.366519 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-05 01:46:32.366526 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-05 01:46:32.366532 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-05 01:46:32.366539 | orchestrator |
2026-01-05 01:46:32.366545 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-01-05 01:46:32.366564 | orchestrator | Monday 05 January 2026 01:46:22 +0000 (0:00:03.442) 0:04:50.724 ********
2026-01-05 01:46:32.366571 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-05 01:46:32.366577 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-05 01:46:32.366584 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-05 01:46:32.366590 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-05 01:46:32.366597 | orchestrator | changed: [testbed-node-3]
2026-01-05 01:46:32.366603 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-05 01:46:32.366609 | orchestrator | changed: [testbed-node-4]
2026-01-05 01:46:32.366616 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-05 01:46:32.366622 | orchestrator | changed: [testbed-node-5]
2026-01-05 01:46:32.366628 | orchestrator |
2026-01-05 01:46:32.366635 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-01-05 01:46:32.366703 | orchestrator | Monday 05 January 2026 01:46:26 +0000 (0:00:03.653) 0:04:54.378 ********
2026-01-05 01:46:32.366710 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:46:32.366716 | orchestrator |
2026-01-05 01:46:32.366723 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-01-05 01:46:32.366729 | orchestrator | Monday 05 January 2026 01:46:26 +0000 (0:00:00.134) 0:04:54.512 ********
2026-01-05 01:46:32.366736 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:46:32.366742 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:46:32.366749 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:46:32.366755 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:46:32.366762 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:46:32.366768 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:46:32.366775 | orchestrator |
2026-01-05 01:46:32.366781 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-01-05 01:46:32.366788 | orchestrator | Monday 05 January 2026 01:46:27 +0000 (0:00:00.640) 0:04:55.153 ********
2026-01-05 01:46:32.366796 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 01:46:32.366803 | orchestrator |
2026-01-05 01:46:32.366809 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-01-05 01:46:32.366817 | orchestrator | Monday 05 January 2026 01:46:28 +0000 (0:00:00.721) 0:04:55.875 ********
2026-01-05 01:46:32.366824 | orchestrator | skipping: [testbed-node-3]
2026-01-05 01:46:32.366849 | orchestrator | skipping: [testbed-node-4]
2026-01-05 01:46:32.366855 | orchestrator | skipping: [testbed-node-5]
2026-01-05 01:46:32.366860 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:46:32.366865 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:46:32.366869 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:46:32.366874 | orchestrator |
2026-01-05 01:46:32.366879 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
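The "Copying over config.json files" task that follows distributes the per-service bootstrap file consumed by the kolla container entrypoint, which copies the listed config files into place before starting the service command. A minimal sketch of the general shape of such a file; the command, paths, owner, and permissions here are illustrative placeholders, not values taken from this deployment:

```python
import json

# Rough shape of a kolla-style config.json (illustrative values only):
# "command" is what the container runs, "config_files" lists files the
# entrypoint copies from the bind-mounted config dir into the container.
config = {
    "command": "nova-conductor",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/nova.conf",
            "dest": "/etc/nova/nova.conf",
            "owner": "nova",
            "perm": "0600",
        }
    ],
}

print(json.dumps(config, indent=4))
```

The loop items in the log (nova-libvirt, nova-ssh, nova-conductor, nova-novncproxy, nova-compute) each get their own such file under /etc/kolla/&lt;service&gt;/.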
2026-01-05 01:46:32.366884 | orchestrator | Monday 05 January 2026 01:46:28 +0000 (0:00:00.819) 0:04:56.695 ******** 2026-01-05 01:46:32.366891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:46:32.366900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:46:32.366909 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:46:32.366920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:46:32.366932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:46:38.454765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:46:38.454894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:46:38.454923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:46:38.454935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:46:38.454984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.454996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.455024 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.455036 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.455048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.455058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.455077 | orchestrator | 2026-01-05 01:46:38.455094 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-05 01:46:38.455105 | orchestrator | Monday 05 January 2026 01:46:32 +0000 (0:00:03.598) 0:05:00.293 ******** 2026-01-05 01:46:38.455116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:46:38.455128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:46:38.455146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:46:38.898535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:46:38.898723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:46:38.898798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:46:38.898818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.898836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.898876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.898894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:46:38.898922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:46:38.898944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:46:38.898961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.898979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.898996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:46:38.899013 | orchestrator | 2026-01-05 01:46:38.899031 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-05 01:46:38.899058 | orchestrator | Monday 05 January 2026 01:46:38 +0000 (0:00:06.393) 0:05:06.687 ******** 2026-01-05 01:47:00.709687 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:47:00.710523 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:47:00.710558 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:47:00.710566 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:00.710599 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:00.710608 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:00.710617 | orchestrator | 2026-01-05 01:47:00.710627 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-05 01:47:00.710637 | orchestrator | Monday 05 January 2026 01:46:40 +0000 (0:00:01.567) 0:05:08.254 ******** 2026-01-05 01:47:00.710644 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-05 01:47:00.710651 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-05 01:47:00.710655 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-05 01:47:00.710660 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-05 01:47:00.710696 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-05 01:47:00.710701 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-05 01:47:00.710707 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:00.710711 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-05 01:47:00.710716 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-05 01:47:00.710721 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-05 01:47:00.710726 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:00.710730 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:00.710735 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-05 01:47:00.710752 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-05 01:47:00.710759 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-05 01:47:00.710766 | orchestrator | 2026-01-05 01:47:00.710773 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-01-05 01:47:00.710780 | orchestrator | Monday 05 January 2026 01:46:44 +0000 (0:00:03.737) 0:05:11.992 ******** 2026-01-05 01:47:00.710787 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:47:00.710794 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:47:00.710802 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:47:00.710809 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:00.710816 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:00.710823 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:00.710832 | orchestrator | 2026-01-05 01:47:00.710836 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-05 01:47:00.710841 | orchestrator | Monday 05 January 2026 01:46:44 +0000 (0:00:00.643) 0:05:12.635 ******** 2026-01-05 01:47:00.710845 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-05 01:47:00.710850 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-05 01:47:00.710855 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-05 01:47:00.710859 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-05 01:47:00.710864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-05 01:47:00.710868 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-05 01:47:00.710872 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-05 01:47:00.710877 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-05 01:47:00.710886 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-05 01:47:00.710891 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-05 01:47:00.710895 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:00.710900 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-05 01:47:00.710904 | orchestrator | 
skipping: [testbed-node-1] 2026-01-05 01:47:00.710908 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-05 01:47:00.710913 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:00.710917 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:47:00.710921 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:47:00.710940 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:47:00.710945 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:47:00.710950 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:47:00.710954 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-05 01:47:00.710958 | orchestrator | 2026-01-05 01:47:00.710963 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-01-05 01:47:00.710967 | orchestrator | Monday 05 January 2026 01:46:50 +0000 (0:00:05.256) 0:05:17.892 ******** 2026-01-05 01:47:00.710972 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 01:47:00.710976 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 01:47:00.710981 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 01:47:00.710985 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-05 01:47:00.710989 | 
orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:47:00.710994 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:47:00.710998 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-05 01:47:00.711002 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-05 01:47:00.711006 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 01:47:00.711011 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 01:47:00.711019 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 01:47:00.711024 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 01:47:00.711028 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-05 01:47:00.711032 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:00.711037 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-05 01:47:00.711041 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:00.711045 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-05 01:47:00.711050 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:00.711057 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:47:00.711062 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:47:00.711066 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 01:47:00.711070 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:47:00.711075 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:47:00.711079 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 01:47:00.711083 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:47:00.711088 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:47:00.711092 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 01:47:00.711096 | orchestrator | 2026-01-05 01:47:00.711101 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-05 01:47:00.711105 | orchestrator | Monday 05 January 2026 01:46:57 +0000 (0:00:07.101) 0:05:24.993 ******** 2026-01-05 01:47:00.711109 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:47:00.711114 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:47:00.711118 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:47:00.711122 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:00.711127 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:00.711131 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:00.711135 | orchestrator | 2026-01-05 01:47:00.711140 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-01-05 01:47:00.711144 | orchestrator | Monday 05 January 2026 01:46:58 +0000 (0:00:00.846) 0:05:25.839 ******** 2026-01-05 01:47:00.711148 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:47:00.711152 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:47:00.711157 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:47:00.711161 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:00.711165 | 
orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:00.711170 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:00.711174 | orchestrator | 2026-01-05 01:47:00.711178 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-05 01:47:00.711183 | orchestrator | Monday 05 January 2026 01:46:58 +0000 (0:00:00.608) 0:05:26.448 ******** 2026-01-05 01:47:00.711187 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:00.711191 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:00.711196 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:00.711200 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:47:00.711204 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:47:00.711209 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:47:00.711213 | orchestrator | 2026-01-05 01:47:00.711221 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-05 01:47:01.982704 | orchestrator | Monday 05 January 2026 01:47:00 +0000 (0:00:02.041) 0:05:28.490 ******** 2026-01-05 01:47:01.982778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-01-05 01:47:01.982830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:47:01.982842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:47:01.982849 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:47:01.982857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:47:01.982864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:47:01.982886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:47:01.982894 | orchestrator | skipping: 
[testbed-node-4] 2026-01-05 01:47:01.982913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-05 01:47:01.982927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-05 01:47:01.982935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-05 01:47:01.982941 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:47:01.982949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:47:01.982962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:47:05.845291 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:05.845386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:47:05.845425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:47:05.845436 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:05.845462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-05 01:47:05.845472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 01:47:05.845482 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:05.845492 | orchestrator | 2026-01-05 01:47:05.845502 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-05 01:47:05.845512 | orchestrator | Monday 05 January 2026 01:47:02 +0000 (0:00:01.917) 0:05:30.407 ******** 2026-01-05 01:47:05.845523 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-05 01:47:05.845533 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-05 01:47:05.845542 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:47:05.845552 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-05 01:47:05.845561 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-05 01:47:05.845570 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:47:05.845579 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-05 01:47:05.845588 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-05 01:47:05.845598 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:47:05.845607 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-05 01:47:05.845615 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-05 01:47:05.845624 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:05.845632 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-01-05 01:47:05.845642 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-05 01:47:05.845652 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:05.845661 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-05 01:47:05.845718 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-05 01:47:05.845729 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:05.845751 | orchestrator | 2026-01-05 01:47:05.845762 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-05 01:47:05.845772 | orchestrator | Monday 05 January 2026 01:47:03 +0000 (0:00:00.671) 0:05:31.078 ******** 2026-01-05 01:47:05.845804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:47:05.845821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:47:05.845828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-05 01:47:05.845835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:47:05.845844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:47:05.845862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:47:57.695947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:47:57.696030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-05 01:47:57.696058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-05 01:47:57.696068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:47:57.696076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:47:57.696104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:47:57.696128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:47:57.696137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 01:47:57.696148 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-05 01:47:57.696156 | orchestrator | 2026-01-05 01:47:57.696166 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-05 01:47:57.696174 | orchestrator | Monday 05 January 2026 01:47:06 +0000 
(0:00:03.177) 0:05:34.256 ******** 2026-01-05 01:47:57.696183 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:47:57.696189 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:47:57.696193 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:47:57.696197 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:47:57.696201 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:47:57.696206 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:47:57.696210 | orchestrator | 2026-01-05 01:47:57.696215 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-05 01:47:57.696219 | orchestrator | Monday 05 January 2026 01:47:07 +0000 (0:00:00.611) 0:05:34.867 ******** 2026-01-05 01:47:57.696223 | orchestrator | 2026-01-05 01:47:57.696228 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-05 01:47:57.696232 | orchestrator | Monday 05 January 2026 01:47:07 +0000 (0:00:00.141) 0:05:35.008 ******** 2026-01-05 01:47:57.696242 | orchestrator | 2026-01-05 01:47:57.696246 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-05 01:47:57.696250 | orchestrator | Monday 05 January 2026 01:47:07 +0000 (0:00:00.137) 0:05:35.145 ******** 2026-01-05 01:47:57.696254 | orchestrator | 2026-01-05 01:47:57.696259 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-05 01:47:57.696263 | orchestrator | Monday 05 January 2026 01:47:07 +0000 (0:00:00.321) 0:05:35.466 ******** 2026-01-05 01:47:57.696267 | orchestrator | 2026-01-05 01:47:57.696271 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-05 01:47:57.696276 | orchestrator | Monday 05 January 2026 01:47:07 +0000 (0:00:00.157) 0:05:35.623 ******** 2026-01-05 01:47:57.696280 | orchestrator | 2026-01-05 01:47:57.696284 | orchestrator | TASK [nova-cell : 
Flush handlers] ********************************************** 2026-01-05 01:47:57.696288 | orchestrator | Monday 05 January 2026 01:47:07 +0000 (0:00:00.137) 0:05:35.761 ******** 2026-01-05 01:47:57.696293 | orchestrator | 2026-01-05 01:47:57.696297 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-05 01:47:57.696301 | orchestrator | Monday 05 January 2026 01:47:08 +0000 (0:00:00.140) 0:05:35.902 ******** 2026-01-05 01:47:57.696306 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:47:57.696310 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:47:57.696314 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:47:57.696318 | orchestrator | 2026-01-05 01:47:57.696323 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-05 01:47:57.696327 | orchestrator | Monday 05 January 2026 01:47:20 +0000 (0:00:12.052) 0:05:47.954 ******** 2026-01-05 01:47:57.696331 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:47:57.696336 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:47:57.696340 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:47:57.696344 | orchestrator | 2026-01-05 01:47:57.696348 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-05 01:47:57.696353 | orchestrator | Monday 05 January 2026 01:47:38 +0000 (0:00:18.570) 0:06:06.525 ******** 2026-01-05 01:47:57.696357 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:47:57.696362 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:47:57.696366 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:47:57.696370 | orchestrator | 2026-01-05 01:47:57.696378 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-05 01:50:11.245862 | orchestrator | Monday 05 January 2026 01:47:57 +0000 (0:00:18.948) 0:06:25.474 ******** 2026-01-05 01:50:11.245974 | orchestrator | 
changed: [testbed-node-3] 2026-01-05 01:50:11.245987 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:50:11.245997 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:50:11.246006 | orchestrator | 2026-01-05 01:50:11.246062 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-05 01:50:11.246073 | orchestrator | Monday 05 January 2026 01:48:29 +0000 (0:00:31.849) 0:06:57.323 ******** 2026-01-05 01:50:11.246081 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:50:11.246090 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-01-05 01:50:11.246100 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-01-05 01:50:11.246108 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:50:11.246116 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:50:11.246124 | orchestrator | 2026-01-05 01:50:11.246132 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-05 01:50:11.246141 | orchestrator | Monday 05 January 2026 01:48:35 +0000 (0:00:06.421) 0:07:03.745 ******** 2026-01-05 01:50:11.246148 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:50:11.246162 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:50:11.246175 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:50:11.246188 | orchestrator | 2026-01-05 01:50:11.246201 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-05 01:50:11.246240 | orchestrator | Monday 05 January 2026 01:48:37 +0000 (0:00:01.064) 0:07:04.810 ******** 2026-01-05 01:50:11.246254 | orchestrator | changed: [testbed-node-4] 2026-01-05 01:50:11.246266 | orchestrator | changed: [testbed-node-3] 2026-01-05 01:50:11.246279 | orchestrator | changed: [testbed-node-5] 2026-01-05 01:50:11.246292 | orchestrator | 2026-01-05 01:50:11.246322 | 
orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-05 01:50:11.246338 | orchestrator | Monday 05 January 2026 01:48:59 +0000 (0:00:22.077) 0:07:26.887 ******** 2026-01-05 01:50:11.246351 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:50:11.246365 | orchestrator | 2026-01-05 01:50:11.246379 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-05 01:50:11.246392 | orchestrator | Monday 05 January 2026 01:48:59 +0000 (0:00:00.137) 0:07:27.025 ******** 2026-01-05 01:50:11.246401 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:11.246411 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:50:11.246421 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:11.246429 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:50:11.246438 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:11.246448 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-01-05 01:50:11.246460 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:50:11.246469 | orchestrator | 2026-01-05 01:50:11.246479 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-05 01:50:11.246488 | orchestrator | Monday 05 January 2026 01:49:22 +0000 (0:00:23.719) 0:07:50.744 ******** 2026-01-05 01:50:11.246497 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:11.246507 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:50:11.246516 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:11.246525 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:50:11.246533 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:11.246541 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:50:11.246549 | orchestrator | 2026-01-05 01:50:11.246557 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-05 01:50:11.246565 | orchestrator | Monday 05 January 2026 01:49:31 +0000 (0:00:08.836) 0:07:59.580 ******** 2026-01-05 01:50:11.246573 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:50:11.246580 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:50:11.246588 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:11.246596 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:11.246604 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:11.246612 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-01-05 01:50:11.246619 | orchestrator | 2026-01-05 01:50:11.246627 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-05 01:50:11.246635 | orchestrator | Monday 05 January 2026 01:49:35 +0000 (0:00:04.162) 0:08:03.743 ******** 2026-01-05 01:50:11.246643 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:50:11.246651 | 
orchestrator | 2026-01-05 01:50:11.246659 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-05 01:50:11.246667 | orchestrator | Monday 05 January 2026 01:49:49 +0000 (0:00:13.800) 0:08:17.543 ******** 2026-01-05 01:50:11.246674 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:50:11.246682 | orchestrator | 2026-01-05 01:50:11.246690 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-05 01:50:11.246698 | orchestrator | Monday 05 January 2026 01:49:51 +0000 (0:00:01.376) 0:08:18.919 ******** 2026-01-05 01:50:11.246706 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:50:11.246714 | orchestrator | 2026-01-05 01:50:11.246722 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-05 01:50:11.246729 | orchestrator | Monday 05 January 2026 01:49:52 +0000 (0:00:01.413) 0:08:20.333 ******** 2026-01-05 01:50:11.246746 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-05 01:50:11.246754 | orchestrator | 2026-01-05 01:50:11.246762 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-01-05 01:50:11.246769 | orchestrator | Monday 05 January 2026 01:50:05 +0000 (0:00:12.743) 0:08:33.077 ******** 2026-01-05 01:50:11.246777 | orchestrator | ok: [testbed-node-4] 2026-01-05 01:50:11.246787 | orchestrator | ok: [testbed-node-3] 2026-01-05 01:50:11.246794 | orchestrator | ok: [testbed-node-5] 2026-01-05 01:50:11.246802 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:11.246810 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:11.246836 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:11.246844 | orchestrator | 2026-01-05 01:50:11.246872 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-05 01:50:11.246880 | orchestrator | 2026-01-05 
01:50:11.246888 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-05 01:50:11.246896 | orchestrator | Monday 05 January 2026 01:50:07 +0000 (0:00:01.858) 0:08:34.936 ******** 2026-01-05 01:50:11.246904 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:50:11.246912 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:50:11.246920 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:50:11.246928 | orchestrator | 2026-01-05 01:50:11.246936 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-05 01:50:11.246944 | orchestrator | 2026-01-05 01:50:11.246952 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-05 01:50:11.246959 | orchestrator | Monday 05 January 2026 01:50:08 +0000 (0:00:01.248) 0:08:36.184 ******** 2026-01-05 01:50:11.246967 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:11.246975 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:11.246983 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:11.246991 | orchestrator | 2026-01-05 01:50:11.246998 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-01-05 01:50:11.247006 | orchestrator | 2026-01-05 01:50:11.247014 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-05 01:50:11.247022 | orchestrator | Monday 05 January 2026 01:50:08 +0000 (0:00:00.555) 0:08:36.740 ******** 2026-01-05 01:50:11.247030 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-05 01:50:11.247037 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-05 01:50:11.247045 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-05 01:50:11.247053 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-01-05 01:50:11.247067 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-01-05 01:50:11.247075 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-01-05 01:50:11.247083 | orchestrator | skipping: [testbed-node-3] 2026-01-05 01:50:11.247091 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-01-05 01:50:11.247099 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-05 01:50:11.247107 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-05 01:50:11.247114 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-01-05 01:50:11.247122 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-01-05 01:50:11.247130 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-01-05 01:50:11.247138 | orchestrator | skipping: [testbed-node-4] 2026-01-05 01:50:11.247146 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-01-05 01:50:11.247153 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-05 01:50:11.247161 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-05 01:50:11.247169 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-01-05 01:50:11.247177 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-01-05 01:50:11.247185 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-01-05 01:50:11.247198 | orchestrator | skipping: [testbed-node-5] 2026-01-05 01:50:11.247206 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-01-05 01:50:11.247214 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-05 01:50:11.247222 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-05 01:50:11.247229 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-01-05 01:50:11.247237 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-01-05 01:50:11.247245 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-01-05 01:50:11.247253 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:11.247260 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-01-05 01:50:11.247268 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-05 01:50:11.247276 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-05 01:50:11.247284 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-01-05 01:50:11.247292 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-01-05 01:50:11.247299 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-01-05 01:50:11.247307 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:11.247315 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-01-05 01:50:11.247323 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-05 01:50:11.247331 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-05 01:50:11.247338 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-01-05 01:50:11.247346 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-01-05 01:50:11.247354 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-01-05 01:50:11.247362 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:11.247373 | orchestrator | 2026-01-05 01:50:11.247386 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-01-05 01:50:11.247399 | orchestrator | 2026-01-05 01:50:11.247413 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-01-05 01:50:11.247427 | orchestrator | Monday 05 January 2026 01:50:10 +0000 (0:00:01.477) 
0:08:38.218 ******** 2026-01-05 01:50:11.247440 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-01-05 01:50:11.247453 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-05 01:50:11.247466 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:11.247480 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-01-05 01:50:11.247493 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-05 01:50:11.247515 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:12.884170 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-01-05 01:50:12.884284 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-05 01:50:12.884299 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:12.884311 | orchestrator | 2026-01-05 01:50:12.884324 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-01-05 01:50:12.884336 | orchestrator | 2026-01-05 01:50:12.884348 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-01-05 01:50:12.884359 | orchestrator | Monday 05 January 2026 01:50:11 +0000 (0:00:00.817) 0:08:39.035 ******** 2026-01-05 01:50:12.884370 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:12.884380 | orchestrator | 2026-01-05 01:50:12.884392 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-01-05 01:50:12.884403 | orchestrator | 2026-01-05 01:50:12.884413 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-01-05 01:50:12.884425 | orchestrator | Monday 05 January 2026 01:50:11 +0000 (0:00:00.701) 0:08:39.736 ******** 2026-01-05 01:50:12.884436 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:12.884447 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:12.884494 | orchestrator | skipping: [testbed-node-2] 
2026-01-05 01:50:12.884510 | orchestrator | 2026-01-05 01:50:12.884523 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 01:50:12.884534 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 01:50:12.884549 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-01-05 01:50:12.884589 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-05 01:50:12.884601 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-05 01:50:12.884612 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-05 01:50:12.884622 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-05 01:50:12.884633 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-01-05 01:50:12.884644 | orchestrator | 2026-01-05 01:50:12.884654 | orchestrator | 2026-01-05 01:50:12.884665 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:50:12.884675 | orchestrator | Monday 05 January 2026 01:50:12 +0000 (0:00:00.462) 0:08:40.199 ******** 2026-01-05 01:50:12.884687 | orchestrator | =============================================================================== 2026-01-05 01:50:12.884698 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.14s 2026-01-05 01:50:12.884709 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 31.85s 2026-01-05 01:50:12.884720 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.72s 2026-01-05 01:50:12.884731 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 22.83s 2026-01-05 01:50:12.884742 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.08s 2026-01-05 01:50:12.884753 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.46s 2026-01-05 01:50:12.884763 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.95s 2026-01-05 01:50:12.884775 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.57s 2026-01-05 01:50:12.884785 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.77s 2026-01-05 01:50:12.884797 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.29s 2026-01-05 01:50:12.884808 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.80s 2026-01-05 01:50:12.884882 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.49s 2026-01-05 01:50:12.884893 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.14s 2026-01-05 01:50:12.884904 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.85s 2026-01-05 01:50:12.884915 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.74s 2026-01-05 01:50:12.884928 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.05s 2026-01-05 01:50:12.884940 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.84s 2026-01-05 01:50:12.884952 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.32s 2026-01-05 01:50:12.884965 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.11s 2026-01-05 01:50:12.884978 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 7.92s 2026-01-05 01:50:15.329893 | orchestrator | 2026-01-05 01:50:15 | INFO  | Task ed1eac22-e274-4e59-81c0-d77d673486e4 (horizon) was prepared for execution. 2026-01-05 01:50:15.329928 | orchestrator | 2026-01-05 01:50:15 | INFO  | It takes a moment until task ed1eac22-e274-4e59-81c0-d77d673486e4 (horizon) has been started and output is visible here. 2026-01-05 01:50:22.745012 | orchestrator | 2026-01-05 01:50:22.745073 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 01:50:22.745080 | orchestrator | 2026-01-05 01:50:22.745085 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 01:50:22.745090 | orchestrator | Monday 05 January 2026 01:50:19 +0000 (0:00:00.274) 0:00:00.274 ******** 2026-01-05 01:50:22.745094 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:22.745100 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:22.745104 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:22.745108 | orchestrator | 2026-01-05 01:50:22.745112 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 01:50:22.745116 | orchestrator | Monday 05 January 2026 01:50:19 +0000 (0:00:00.319) 0:00:00.594 ******** 2026-01-05 01:50:22.745120 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-05 01:50:22.745125 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-05 01:50:22.745129 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-05 01:50:22.745133 | orchestrator | 2026-01-05 01:50:22.745137 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-05 01:50:22.745141 | orchestrator | 2026-01-05 01:50:22.745145 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 01:50:22.745149 | 
orchestrator | Monday 05 January 2026 01:50:20 +0000 (0:00:00.445) 0:00:01.039 ******** 2026-01-05 01:50:22.745153 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:50:22.745159 | orchestrator | 2026-01-05 01:50:22.745163 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-05 01:50:22.745178 | orchestrator | Monday 05 January 2026 01:50:20 +0000 (0:00:00.553) 0:00:01.593 ******** 2026-01-05 01:50:22.745186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:50:22.745237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:50:22.745243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:50:22.745252 | orchestrator | 2026-01-05 01:50:22.745256 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-05 01:50:22.745260 | orchestrator | Monday 05 January 2026 01:50:22 +0000 (0:00:01.280) 0:00:02.874 ******** 2026-01-05 01:50:22.745264 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:22.745268 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:22.745272 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:22.745276 | orchestrator | 2026-01-05 01:50:22.745280 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-05 01:50:22.745284 | orchestrator | Monday 05 January 2026 01:50:22 +0000 (0:00:00.493) 0:00:03.368 ******** 2026-01-05 01:50:22.745290 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 01:50:28.955416 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 01:50:28.955512 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 01:50:28.955523 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 01:50:28.955531 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 01:50:28.955537 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 01:50:28.955543 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-05 01:50:28.955549 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 01:50:28.955556 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 01:50:28.955562 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 01:50:28.955571 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 01:50:28.955578 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 01:50:28.955584 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 01:50:28.955607 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 01:50:28.955614 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-05 01:50:28.955619 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 01:50:28.955625 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-05 01:50:28.955631 | 
orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-05 01:50:28.955638 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-05 01:50:28.955643 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-05 01:50:28.955649 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-05 01:50:28.955654 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-05 01:50:28.955660 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-05 01:50:28.955668 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-05 01:50:28.955691 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-05 01:50:28.955698 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-05 01:50:28.955702 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-05 01:50:28.955706 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-05 01:50:28.955710 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-05 01:50:28.955713 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 
2026-01-05 01:50:28.955717 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-05 01:50:28.955721 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-05 01:50:28.955725 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-05 01:50:28.955730 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-05 01:50:28.955734 | orchestrator | 2026-01-05 01:50:28.955739 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:28.955744 | orchestrator | Monday 05 January 2026 01:50:23 +0000 (0:00:00.777) 0:00:04.145 ******** 2026-01-05 01:50:28.955747 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:28.955753 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:28.955757 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:28.955760 | orchestrator | 2026-01-05 01:50:28.955764 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:28.955769 | orchestrator | Monday 05 January 2026 01:50:23 +0000 (0:00:00.340) 0:00:04.486 ******** 2026-01-05 01:50:28.955773 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.955778 | orchestrator | 2026-01-05 01:50:28.955795 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:28.955799 | orchestrator | Monday 05 January 2026 01:50:24 +0000 (0:00:00.352) 0:00:04.838 ******** 2026-01-05 01:50:28.955803 | orchestrator | skipping: [testbed-node-0] 2026-01-05 
01:50:28.955807 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:28.955810 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:28.955814 | orchestrator | 2026-01-05 01:50:28.955818 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:28.955822 | orchestrator | Monday 05 January 2026 01:50:24 +0000 (0:00:00.299) 0:00:05.138 ******** 2026-01-05 01:50:28.955860 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:28.955866 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:28.955870 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:28.955873 | orchestrator | 2026-01-05 01:50:28.955877 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:28.955881 | orchestrator | Monday 05 January 2026 01:50:24 +0000 (0:00:00.329) 0:00:05.467 ******** 2026-01-05 01:50:28.955885 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.955888 | orchestrator | 2026-01-05 01:50:28.955892 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:28.955896 | orchestrator | Monday 05 January 2026 01:50:24 +0000 (0:00:00.128) 0:00:05.596 ******** 2026-01-05 01:50:28.955904 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.955908 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:28.955914 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:28.955920 | orchestrator | 2026-01-05 01:50:28.955926 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:28.955939 | orchestrator | Monday 05 January 2026 01:50:25 +0000 (0:00:00.291) 0:00:05.888 ******** 2026-01-05 01:50:28.955945 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:28.955951 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:28.955957 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:28.955963 | orchestrator | 
2026-01-05 01:50:28.955970 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:28.955976 | orchestrator | Monday 05 January 2026 01:50:25 +0000 (0:00:00.554) 0:00:06.442 ******** 2026-01-05 01:50:28.955982 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.955989 | orchestrator | 2026-01-05 01:50:28.955995 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:28.956001 | orchestrator | Monday 05 January 2026 01:50:25 +0000 (0:00:00.128) 0:00:06.571 ******** 2026-01-05 01:50:28.956008 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.956014 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:28.956020 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:28.956026 | orchestrator | 2026-01-05 01:50:28.956035 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:28.956041 | orchestrator | Monday 05 January 2026 01:50:26 +0000 (0:00:00.328) 0:00:06.900 ******** 2026-01-05 01:50:28.956048 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:28.956054 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:28.956060 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:28.956066 | orchestrator | 2026-01-05 01:50:28.956072 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:28.956079 | orchestrator | Monday 05 January 2026 01:50:26 +0000 (0:00:00.345) 0:00:07.245 ******** 2026-01-05 01:50:28.956085 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.956092 | orchestrator | 2026-01-05 01:50:28.956100 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:28.956107 | orchestrator | Monday 05 January 2026 01:50:26 +0000 (0:00:00.134) 0:00:07.380 ******** 2026-01-05 01:50:28.956115 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 01:50:28.956121 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:28.956130 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:28.956137 | orchestrator | 2026-01-05 01:50:28.956145 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:28.956152 | orchestrator | Monday 05 January 2026 01:50:27 +0000 (0:00:00.544) 0:00:07.925 ******** 2026-01-05 01:50:28.956159 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:28.956165 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:28.956172 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:28.956179 | orchestrator | 2026-01-05 01:50:28.956185 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:28.956192 | orchestrator | Monday 05 January 2026 01:50:27 +0000 (0:00:00.349) 0:00:08.274 ******** 2026-01-05 01:50:28.956199 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.956207 | orchestrator | 2026-01-05 01:50:28.956215 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:28.956223 | orchestrator | Monday 05 January 2026 01:50:27 +0000 (0:00:00.142) 0:00:08.417 ******** 2026-01-05 01:50:28.956229 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.956237 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:28.956244 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:28.956252 | orchestrator | 2026-01-05 01:50:28.956259 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:28.956266 | orchestrator | Monday 05 January 2026 01:50:27 +0000 (0:00:00.300) 0:00:08.717 ******** 2026-01-05 01:50:28.956349 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:28.956359 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:28.956365 | orchestrator | ok: [testbed-node-2] 2026-01-05 
01:50:28.956371 | orchestrator | 2026-01-05 01:50:28.956377 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:28.956383 | orchestrator | Monday 05 January 2026 01:50:28 +0000 (0:00:00.305) 0:00:09.022 ******** 2026-01-05 01:50:28.956389 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.956395 | orchestrator | 2026-01-05 01:50:28.956401 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:28.956407 | orchestrator | Monday 05 January 2026 01:50:28 +0000 (0:00:00.130) 0:00:09.153 ******** 2026-01-05 01:50:28.956413 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:28.956419 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:28.956425 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:28.956431 | orchestrator | 2026-01-05 01:50:28.956437 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:28.956455 | orchestrator | Monday 05 January 2026 01:50:28 +0000 (0:00:00.540) 0:00:09.693 ******** 2026-01-05 01:50:43.181635 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:43.181724 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:43.181733 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:43.181740 | orchestrator | 2026-01-05 01:50:43.181748 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:43.181755 | orchestrator | Monday 05 January 2026 01:50:29 +0000 (0:00:00.346) 0:00:10.040 ******** 2026-01-05 01:50:43.181762 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:43.181771 | orchestrator | 2026-01-05 01:50:43.181781 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:43.181795 | orchestrator | Monday 05 January 2026 01:50:29 +0000 (0:00:00.155) 0:00:10.195 ******** 2026-01-05 01:50:43.181809 | 
orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:43.181818 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:43.181829 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:43.181940 | orchestrator | 2026-01-05 01:50:43.181953 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:43.181960 | orchestrator | Monday 05 January 2026 01:50:29 +0000 (0:00:00.298) 0:00:10.494 ******** 2026-01-05 01:50:43.181966 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:43.181972 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:43.181978 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:43.181984 | orchestrator | 2026-01-05 01:50:43.181990 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:43.181997 | orchestrator | Monday 05 January 2026 01:50:30 +0000 (0:00:00.546) 0:00:11.040 ******** 2026-01-05 01:50:43.182003 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:43.182009 | orchestrator | 2026-01-05 01:50:43.182083 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:43.182093 | orchestrator | Monday 05 January 2026 01:50:30 +0000 (0:00:00.145) 0:00:11.186 ******** 2026-01-05 01:50:43.182103 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:43.182112 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:43.182122 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:43.182131 | orchestrator | 2026-01-05 01:50:43.182139 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:43.182148 | orchestrator | Monday 05 January 2026 01:50:30 +0000 (0:00:00.317) 0:00:11.503 ******** 2026-01-05 01:50:43.182156 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:43.182165 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:43.182175 | orchestrator | ok: 
[testbed-node-2] 2026-01-05 01:50:43.182183 | orchestrator | 2026-01-05 01:50:43.182193 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:43.182203 | orchestrator | Monday 05 January 2026 01:50:31 +0000 (0:00:00.348) 0:00:11.852 ******** 2026-01-05 01:50:43.182212 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:43.182243 | orchestrator | 2026-01-05 01:50:43.182254 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:43.182264 | orchestrator | Monday 05 January 2026 01:50:31 +0000 (0:00:00.132) 0:00:11.985 ******** 2026-01-05 01:50:43.182275 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:43.182285 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:43.182295 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:43.182304 | orchestrator | 2026-01-05 01:50:43.182312 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-05 01:50:43.182318 | orchestrator | Monday 05 January 2026 01:50:31 +0000 (0:00:00.522) 0:00:12.507 ******** 2026-01-05 01:50:43.182325 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:50:43.182333 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:50:43.182339 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:50:43.182346 | orchestrator | 2026-01-05 01:50:43.182353 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-05 01:50:43.182360 | orchestrator | Monday 05 January 2026 01:50:32 +0000 (0:00:00.325) 0:00:12.833 ******** 2026-01-05 01:50:43.182367 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:43.182374 | orchestrator | 2026-01-05 01:50:43.182380 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-05 01:50:43.182388 | orchestrator | Monday 05 January 2026 01:50:32 +0000 (0:00:00.132) 0:00:12.965 ******** 
2026-01-05 01:50:43.182395 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:43.182401 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:43.182408 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:43.182415 | orchestrator | 2026-01-05 01:50:43.182425 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-05 01:50:43.182434 | orchestrator | Monday 05 January 2026 01:50:32 +0000 (0:00:00.329) 0:00:13.294 ******** 2026-01-05 01:50:43.182444 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:50:43.182457 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:50:43.182471 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:50:43.182479 | orchestrator | 2026-01-05 01:50:43.182488 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-05 01:50:43.182496 | orchestrator | Monday 05 January 2026 01:50:34 +0000 (0:00:01.756) 0:00:15.050 ******** 2026-01-05 01:50:43.182505 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-05 01:50:43.182516 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-05 01:50:43.182525 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-05 01:50:43.182534 | orchestrator | 2026-01-05 01:50:43.182542 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-05 01:50:43.182551 | orchestrator | Monday 05 January 2026 01:50:36 +0000 (0:00:02.034) 0:00:17.085 ******** 2026-01-05 01:50:43.182561 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-05 01:50:43.182573 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-05 01:50:43.182583 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-05 01:50:43.182593 | orchestrator |
2026-01-05 01:50:43.182603 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-05 01:50:43.182631 | orchestrator | Monday 05 January 2026 01:50:38 +0000 (0:00:01.949) 0:00:19.035 ********
2026-01-05 01:50:43.182638 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-05 01:50:43.182644 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-05 01:50:43.182650 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-05 01:50:43.182663 | orchestrator |
2026-01-05 01:50:43.182669 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-05 01:50:43.182674 | orchestrator | Monday 05 January 2026 01:50:39 +0000 (0:00:01.653) 0:00:20.688 ********
2026-01-05 01:50:43.182680 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:50:43.182686 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:50:43.182691 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:50:43.182697 | orchestrator |
2026-01-05 01:50:43.182703 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-05 01:50:43.182709 | orchestrator | Monday 05 January 2026 01:50:40 +0000 (0:00:00.512) 0:00:21.016 ********
2026-01-05 01:50:43.182715 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:50:43.182720 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:50:43.182726 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:50:43.182732 | orchestrator |
2026-01-05 01:50:43.182738 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-05 01:50:43.182749
| orchestrator | Monday 05 January 2026 01:50:40 +0000 (0:00:00.512) 0:00:21.529 ******** 2026-01-05 01:50:43.182755 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:50:43.182762 | orchestrator | 2026-01-05 01:50:43.182767 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-05 01:50:43.182773 | orchestrator | Monday 05 January 2026 01:50:41 +0000 (0:00:00.631) 0:00:22.161 ******** 2026-01-05 01:50:43.182784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:50:43.182805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:50:44.088739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:50:44.088830 | orchestrator | 2026-01-05 01:50:44.088874 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-05 01:50:44.088898 | orchestrator | Monday 05 January 2026 01:50:43 +0000 (0:00:01.758) 0:00:23.920 ******** 2026-01-05 01:50:44.088917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:50:44.088923 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:44.088929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:50:44.088937 | orchestrator | skipping: [testbed-node-1] 
2026-01-05 01:50:44.088976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:50:46.398784 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:46.398928 | orchestrator | 2026-01-05 01:50:46.398945 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-05 01:50:46.398957 | orchestrator | Monday 05 January 2026 01:50:44 +0000 (0:00:00.905) 0:00:24.826 ******** 2026-01-05 01:50:46.398973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:50:46.399014 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:50:46.399061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:50:46.399075 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:50:46.399086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 01:50:46.399103 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:50:46.399113 | orchestrator | 2026-01-05 01:50:46.399137 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-05 01:50:46.399147 | orchestrator | Monday 05 January 2026 01:50:44 +0000 (0:00:00.849) 0:00:25.676 ******** 2026-01-05 01:50:46.399166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:51:29.768548 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 01:51:29.769498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-05 01:51:29.769571 | orchestrator |
2026-01-05 01:51:29.769582 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-05 01:51:29.769592 | orchestrator | Monday 05 January 2026 01:50:46 +0000 (0:00:01.464) 0:00:27.140 ********
2026-01-05 01:51:29.769599 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:51:29.769608 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:51:29.769614 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:51:29.769621 | orchestrator |
2026-01-05 01:51:29.769628 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-05 01:51:29.769635 | orchestrator | Monday 05 January 2026 01:50:46 +0000 (0:00:00.539) 0:00:27.679 ********
2026-01-05 01:51:29.769643 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:51:29.769651 | orchestrator |
2026-01-05 01:51:29.769658 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-01-05 01:51:29.769665 | orchestrator | Monday 05 January 2026 01:50:47 +0000 (0:00:02.420) 0:00:28.283 ********
2026-01-05 01:51:29.769672 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:51:29.769678 | orchestrator |
2026-01-05 01:51:29.769684 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-01-05 01:51:29.769691 | orchestrator | Monday 05 January 2026 01:50:49 +0000 (0:00:02.437) 0:00:30.703 ********
2026-01-05 01:51:29.769698 | orchestrator | changed:
[testbed-node-0]
2026-01-05 01:51:29.769705 | orchestrator |
2026-01-05 01:51:29.769712 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-05 01:51:29.769719 | orchestrator | Monday 05 January 2026 01:50:52 +0000 (0:00:02.437) 0:00:33.141 ********
2026-01-05 01:51:29.769726 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:51:29.769733 | orchestrator |
2026-01-05 01:51:29.769740 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-05 01:51:29.769747 | orchestrator | Monday 05 January 2026 01:51:09 +0000 (0:00:16.895) 0:00:50.037 ********
2026-01-05 01:51:29.769754 | orchestrator |
2026-01-05 01:51:29.769761 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-05 01:51:29.769768 | orchestrator | Monday 05 January 2026 01:51:09 +0000 (0:00:00.292) 0:00:50.329 ********
2026-01-05 01:51:29.769775 | orchestrator |
2026-01-05 01:51:29.769783 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-05 01:51:29.769804 | orchestrator | Monday 05 January 2026 01:51:09 +0000 (0:00:00.071) 0:00:50.401 ********
2026-01-05 01:51:29.769812 | orchestrator |
2026-01-05 01:51:29.769819 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-05 01:51:29.769826 | orchestrator | Monday 05 January 2026 01:51:09 +0000 (0:00:00.076) 0:00:50.478 ********
2026-01-05 01:51:29.769832 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:51:29.769839 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:51:29.769845 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:51:29.769852 | orchestrator |
2026-01-05 01:51:29.769859 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:51:29.769886 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0
skipped=25  rescued=0 ignored=0 2026-01-05 01:51:29.769896 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-05 01:51:29.769903 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-05 01:51:29.769917 | orchestrator | 2026-01-05 01:51:29.769924 | orchestrator | 2026-01-05 01:51:29.769930 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 01:51:29.769937 | orchestrator | Monday 05 January 2026 01:51:29 +0000 (0:00:20.006) 0:01:10.484 ******** 2026-01-05 01:51:29.769944 | orchestrator | =============================================================================== 2026-01-05 01:51:29.769951 | orchestrator | horizon : Restart horizon container ------------------------------------ 20.01s 2026-01-05 01:51:29.769957 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.90s 2026-01-05 01:51:29.769964 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.44s 2026-01-05 01:51:29.769970 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.42s 2026-01-05 01:51:29.769976 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.03s 2026-01-05 01:51:29.769981 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.95s 2026-01-05 01:51:29.769988 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.76s 2026-01-05 01:51:29.769994 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.76s 2026-01-05 01:51:29.770001 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.65s 2026-01-05 01:51:29.770008 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.46s 
2026-01-05 01:51:29.770069 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.28s 2026-01-05 01:51:29.770076 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.91s 2026-01-05 01:51:29.770083 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.85s 2026-01-05 01:51:29.770099 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2026-01-05 01:51:30.137584 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2026-01-05 01:51:30.137703 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-01-05 01:51:30.137714 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2026-01-05 01:51:30.137722 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-01-05 01:51:30.137729 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2026-01-05 01:51:30.137737 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s 2026-01-05 01:51:32.587284 | orchestrator | 2026-01-05 01:51:32 | INFO  | Task 20aac649-88db-4738-b497-71ae9bd7b290 (skyline) was prepared for execution. 2026-01-05 01:51:32.587384 | orchestrator | 2026-01-05 01:51:32 | INFO  | It takes a moment until task 20aac649-88db-4738-b497-71ae9bd7b290 (skyline) has been started and output is visible here. 
2026-01-05 01:52:04.883269 | orchestrator |
2026-01-05 01:52:04.883399 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:52:04.883415 | orchestrator |
2026-01-05 01:52:04.883422 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:52:04.883429 | orchestrator | Monday 05 January 2026 01:51:36 +0000 (0:00:00.277) 0:00:00.277 ********
2026-01-05 01:52:04.883436 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:52:04.883443 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:52:04.883448 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:52:04.883452 | orchestrator |
2026-01-05 01:52:04.883456 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:52:04.883461 | orchestrator | Monday 05 January 2026 01:51:37 +0000 (0:00:00.346) 0:00:00.623 ********
2026-01-05 01:52:04.883465 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-01-05 01:52:04.883469 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-01-05 01:52:04.883474 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-01-05 01:52:04.883497 | orchestrator |
2026-01-05 01:52:04.883501 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-01-05 01:52:04.883505 | orchestrator |
2026-01-05 01:52:04.883509 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-01-05 01:52:04.883513 | orchestrator | Monday 05 January 2026 01:51:37 +0000 (0:00:00.450) 0:00:01.074 ********
2026-01-05 01:52:04.883528 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:52:04.883534 | orchestrator |
2026-01-05 01:52:04.883538 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-01-05 01:52:04.883542 | orchestrator | Monday 05 January 2026 01:51:38 +0000 (0:00:00.564) 0:00:01.638 ********
2026-01-05 01:52:04.883545 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-01-05 01:52:04.883549 | orchestrator |
2026-01-05 01:52:04.883553 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-01-05 01:52:04.883557 | orchestrator | Monday 05 January 2026 01:51:41 +0000 (0:00:03.566) 0:00:05.205 ********
2026-01-05 01:52:04.883560 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-01-05 01:52:04.883565 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-01-05 01:52:04.883569 | orchestrator |
2026-01-05 01:52:04.883572 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-01-05 01:52:04.883576 | orchestrator | Monday 05 January 2026 01:51:48 +0000 (0:00:06.664) 0:00:11.870 ********
2026-01-05 01:52:04.883580 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 01:52:04.883585 | orchestrator |
2026-01-05 01:52:04.883589 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-01-05 01:52:04.883592 | orchestrator | Monday 05 January 2026 01:51:51 +0000 (0:00:03.408) 0:00:15.278 ********
2026-01-05 01:52:04.883596 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 01:52:04.883600 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-01-05 01:52:04.883604 | orchestrator |
2026-01-05 01:52:04.883608 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-01-05 01:52:04.883612 | orchestrator | Monday 05 January 2026 01:51:56 +0000 (0:00:04.207) 0:00:19.486 ********
2026-01-05 01:52:04.883616 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 01:52:04.883620 | orchestrator | 2026-01-05 01:52:04.883623 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-01-05 01:52:04.883627 | orchestrator | Monday 05 January 2026 01:51:59 +0000 (0:00:03.410) 0:00:22.897 ******** 2026-01-05 01:52:04.883632 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-01-05 01:52:04.883636 | orchestrator | 2026-01-05 01:52:04.883639 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-01-05 01:52:04.883643 | orchestrator | Monday 05 January 2026 01:52:03 +0000 (0:00:03.989) 0:00:26.886 ******** 2026-01-05 01:52:04.883665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:04.883686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:04.883698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:04.883704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:04.883709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:04.883718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:08.843685 | orchestrator | 2026-01-05 01:52:08.843844 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-01-05 01:52:08.843862 | orchestrator | Monday 05 January 2026 01:52:04 +0000 (0:00:01.330) 0:00:28.217 ******** 2026-01-05 01:52:08.843872 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:52:08.843881 | orchestrator | 2026-01-05 01:52:08.843890 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-01-05 01:52:08.843895 | orchestrator | Monday 05 January 2026 01:52:05 +0000 (0:00:00.753) 0:00:28.971 ******** 2026-01-05 01:52:08.843918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:08.843928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:08.843934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:08.843973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:08.843984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:08.843990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:08.843995 | orchestrator | 2026-01-05 01:52:08.844023 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-01-05 01:52:08.844029 | orchestrator | Monday 05 January 2026 01:52:08 +0000 (0:00:02.532) 0:00:31.503 ******** 2026-01-05 01:52:08.844034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 01:52:08.844045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 01:52:08.844051 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:52:08.844067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 01:52:10.171308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 01:52:10.172285 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:52:10.172340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 01:52:10.172387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 01:52:10.172404 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:52:10.172418 | orchestrator | 2026-01-05 01:52:10.172435 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-01-05 01:52:10.172451 | orchestrator | Monday 05 January 2026 01:52:08 +0000 (0:00:00.684) 0:00:32.188 ******** 2026-01-05 01:52:10.172482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 01:52:10.172530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 01:52:10.172545 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:52:10.172559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 01:52:10.172584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 01:52:10.172597 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:52:10.172611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 01:52:10.172655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 01:52:19.251412 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:52:19.251536 | orchestrator | 2026-01-05 01:52:19.251554 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-01-05 01:52:19.251565 | orchestrator | Monday 05 January 2026 01:52:10 +0000 (0:00:01.314) 0:00:33.503 ******** 2026-01-05 01:52:19.251577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:19.251622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:19.251635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:19.251662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:19.251697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:19.251718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-05 01:52:19.251785 | orchestrator |
2026-01-05 01:52:19.251797 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-01-05 01:52:19.251807 | orchestrator | Monday 05 January 2026 01:52:12 +0000 (0:00:02.667) 0:00:36.170 ********
2026-01-05 01:52:19.251816 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-01-05 01:52:19.251826 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-01-05 01:52:19.251836 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-01-05 01:52:19.251845 | orchestrator |
2026-01-05 01:52:19.251856 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-01-05 01:52:19.251867 | orchestrator | Monday 05 January 2026 01:52:14 +0000 (0:00:01.687) 0:00:37.858 ********
2026-01-05 01:52:19.251878 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-01-05 01:52:19.251888 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-01-05 01:52:19.251898 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-01-05 01:52:19.251907 | orchestrator |
2026-01-05 01:52:19.251915 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-01-05 01:52:19.251925 | orchestrator | Monday 05 January 2026 01:52:16 +0000 (0:00:02.240) 0:00:40.098 ********
2026-01-05 01:52:19.251943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:19.251969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:21.386317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:21.386404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:21.386429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:21.386437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:21.386465 | orchestrator | 2026-01-05 01:52:21.386474 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-01-05 01:52:21.386482 | orchestrator | Monday 05 January 2026 01:52:19 +0000 (0:00:02.498) 0:00:42.596 ******** 2026-01-05 01:52:21.386488 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:52:21.386496 | orchestrator | skipping: 
[testbed-node-1] 2026-01-05 01:52:21.386502 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:52:21.386509 | orchestrator | 2026-01-05 01:52:21.386528 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-01-05 01:52:21.386535 | orchestrator | Monday 05 January 2026 01:52:19 +0000 (0:00:00.300) 0:00:42.897 ******** 2026-01-05 01:52:21.386542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:21.386549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:21.386560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:21.386567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:21.386587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 01:52:48.522772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}})
2026-01-05 01:52:48.522895 | orchestrator |
2026-01-05 01:52:48.522912 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-01-05 01:52:48.522925 | orchestrator | Monday 05 January 2026 01:52:21 +0000 (0:00:01.828) 0:00:44.726 ********
2026-01-05 01:52:48.522936 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:52:48.522948 | orchestrator |
2026-01-05 01:52:48.522959 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-01-05 01:52:48.522981 | orchestrator | Monday 05 January 2026 01:52:23 +0000 (0:00:02.202) 0:00:46.929 ********
2026-01-05 01:52:48.522991 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:52:48.523002 | orchestrator |
2026-01-05 01:52:48.523013 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-01-05 01:52:48.523023 | orchestrator | Monday 05 January 2026 01:52:25 +0000 (0:00:02.388) 0:00:49.317 ********
2026-01-05 01:52:48.523034 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:52:48.523044 | orchestrator |
2026-01-05 01:52:48.523055 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-01-05 01:52:48.523066 | orchestrator | Monday 05 January 2026 01:52:33 +0000 (0:00:07.667) 0:00:56.984 ********
2026-01-05 01:52:48.523076 | orchestrator |
2026-01-05 01:52:48.523087 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-01-05 01:52:48.523097 | orchestrator | Monday 05 January 2026 01:52:33 +0000 (0:00:00.071) 0:00:57.056 ********
2026-01-05 01:52:48.523132 | orchestrator |
2026-01-05 01:52:48.523143 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-01-05 01:52:48.523153 | orchestrator | Monday 05 January 2026 01:52:33 +0000 (0:00:00.070) 0:00:57.127 ********
2026-01-05 01:52:48.523164 | orchestrator |
2026-01-05 01:52:48.523174 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-01-05 01:52:48.523200 | orchestrator | Monday 05 January 2026 01:52:33 +0000 (0:00:00.075) 0:00:57.202 ********
2026-01-05 01:52:48.523211 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:52:48.523221 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:52:48.523231 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:52:48.523242 | orchestrator |
2026-01-05 01:52:48.523252 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-01-05 01:52:48.523263 | orchestrator | Monday 05 January 2026 01:52:39 +0000 (0:00:06.069) 0:01:03.272 ********
2026-01-05 01:52:48.523273 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:52:48.523283 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:52:48.523294 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:52:48.523304 | orchestrator |
2026-01-05 01:52:48.523329 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:52:48.523351 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 01:52:48.523364 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 01:52:48.523374 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-05 01:52:48.523384 | orchestrator |
2026-01-05 01:52:48.523395 | orchestrator |
2026-01-05 01:52:48.523405 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:52:48.523415 | orchestrator | Monday 05 January 2026 01:52:48 +0000 (0:00:08.293) 0:01:11.566 ********
2026-01-05 01:52:48.523426 | orchestrator | ===============================================================================
2026-01-05 01:52:48.523436 | orchestrator | skyline : Restart skyline-console container ----------------------------- 8.29s
2026-01-05 01:52:48.523447 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.67s
2026-01-05 01:52:48.523458 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.67s
2026-01-05 01:52:48.523468 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 6.07s
2026-01-05 01:52:48.523479 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.21s
2026-01-05 01:52:48.523489 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.99s
2026-01-05 01:52:48.523499 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.57s
2026-01-05 01:52:48.523510 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.41s
2026-01-05 01:52:48.523541 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.41s
2026-01-05 01:52:48.523552 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.67s
2026-01-05 01:52:48.523562 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.53s
2026-01-05 01:52:48.523573 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.50s
2026-01-05 01:52:48.523584 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.39s
2026-01-05 01:52:48.523617 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.24s
2026-01-05 01:52:48.523628 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.20s
2026-01-05 01:52:48.523638 | orchestrator | skyline : Check skyline container --------------------------------------- 1.83s
2026-01-05 01:52:48.523649 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.69s
2026-01-05 01:52:48.523670 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.33s
2026-01-05 01:52:48.523682 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.31s
2026-01-05 01:52:48.523694 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.75s
2026-01-05 01:52:50.791139 | orchestrator | 2026-01-05 01:52:50 | INFO  | Task fdfa4df8-5ab3-41e4-bbe9-903e42f1e407 (glance) was prepared for execution.
2026-01-05 01:52:50.791307 | orchestrator | 2026-01-05 01:52:50 | INFO  | It takes a moment until task fdfa4df8-5ab3-41e4-bbe9-903e42f1e407 (glance) has been started and output is visible here.
2026-01-05 01:53:25.850420 | orchestrator |
2026-01-05 01:53:25.850980 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:53:25.851000 | orchestrator |
2026-01-05 01:53:25.851012 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:53:25.851023 | orchestrator | Monday 05 January 2026 01:52:55 +0000 (0:00:00.267) 0:00:00.267 ********
2026-01-05 01:53:25.851034 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:53:25.851048 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:53:25.851059 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:53:25.851071 | orchestrator |
2026-01-05 01:53:25.851083 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:53:25.851095 | orchestrator | Monday 05 January 2026 01:52:55 +0000 (0:00:00.321) 0:00:00.589 ********
2026-01-05 01:53:25.851108 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-01-05 01:53:25.851121 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-01-05 01:53:25.851131 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-01-05 01:53:25.851142 | orchestrator |
2026-01-05 01:53:25.851152 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-01-05 01:53:25.851162 | orchestrator |
2026-01-05 01:53:25.851174 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-05 01:53:25.851202 | orchestrator | Monday 05 January 2026 01:52:55 +0000 (0:00:00.448) 0:00:01.037 ********
2026-01-05 01:53:25.851213 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:53:25.851225 | orchestrator |
2026-01-05 01:53:25.851235 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-01-05 01:53:25.851245 | orchestrator | Monday 05 January 2026 01:52:56 +0000 (0:00:00.594) 0:00:01.632 ********
2026-01-05 01:53:25.851255 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-01-05 01:53:25.851265 | orchestrator |
2026-01-05 01:53:25.851274 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-01-05 01:53:25.851284 | orchestrator | Monday 05 January 2026 01:53:00 +0000 (0:00:03.549) 0:00:05.182 ********
2026-01-05 01:53:25.851295 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-01-05 01:53:25.851305 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-01-05 01:53:25.851330 | orchestrator |
2026-01-05 01:53:25.851341 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-01-05 01:53:25.851351 | orchestrator | Monday 05 January 2026 01:53:06 +0000 (0:00:06.827) 0:00:12.009 ********
2026-01-05 01:53:25.851362 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 01:53:25.851373 | orchestrator |
2026-01-05 01:53:25.851383 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-01-05 01:53:25.851393 | orchestrator | Monday 05 January 2026 01:53:10 +0000 (0:00:03.337) 0:00:15.347 ********
2026-01-05 01:53:25.851402 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 01:53:25.851413 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-01-05 01:53:25.851423 | orchestrator |
2026-01-05 01:53:25.851457 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-01-05 01:53:25.851498 | orchestrator | Monday 05 January 2026 01:53:14 +0000 (0:00:04.199) 0:00:19.547 ********
2026-01-05 01:53:25.851510 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05
01:53:25.851522 | orchestrator | 2026-01-05 01:53:25.851532 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-01-05 01:53:25.851541 | orchestrator | Monday 05 January 2026 01:53:17 +0000 (0:00:03.289) 0:00:22.836 ******** 2026-01-05 01:53:25.851550 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-01-05 01:53:25.851559 | orchestrator | 2026-01-05 01:53:25.851567 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-01-05 01:53:25.851576 | orchestrator | Monday 05 January 2026 01:53:21 +0000 (0:00:03.952) 0:00:26.789 ******** 2026-01-05 01:53:25.851623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:53:25.851649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:53:25.851667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:53:25.851675 | orchestrator | 2026-01-05 01:53:25.851681 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-01-05 01:53:25.851687 | orchestrator | Monday 05 January 2026 01:53:25 +0000 (0:00:03.413) 0:00:30.202 ******** 2026-01-05 01:53:25.851695 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:53:25.851701 | orchestrator | 2026-01-05 01:53:25.851714 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-01-05 01:53:41.520058 | orchestrator | Monday 05 January 2026 01:53:25 +0000 (0:00:00.806) 0:00:31.008 ******** 2026-01-05 01:53:41.520940 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:53:41.520977 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:53:41.520982 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:53:41.520986 | orchestrator | 2026-01-05 01:53:41.520992 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-01-05 01:53:41.520997 | orchestrator | Monday 05 January 2026 01:53:29 +0000 (0:00:03.688) 0:00:34.697 ******** 2026-01-05 01:53:41.521002 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:53:41.521008 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:53:41.521012 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:53:41.521016 | orchestrator | 2026-01-05 01:53:41.521020 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-01-05 01:53:41.521023 | orchestrator | Monday 05 January 2026 01:53:31 +0000 (0:00:01.624) 0:00:36.321 ******** 2026-01-05 01:53:41.521040 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 
01:53:41.521044 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:53:41.521048 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:53:41.521079 | orchestrator | 2026-01-05 01:53:41.521093 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-01-05 01:53:41.521103 | orchestrator | Monday 05 January 2026 01:53:32 +0000 (0:00:01.423) 0:00:37.744 ******** 2026-01-05 01:53:41.521107 | orchestrator | ok: [testbed-node-0] 2026-01-05 01:53:41.521118 | orchestrator | ok: [testbed-node-1] 2026-01-05 01:53:41.521122 | orchestrator | ok: [testbed-node-2] 2026-01-05 01:53:41.521132 | orchestrator | 2026-01-05 01:53:41.521136 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-01-05 01:53:41.521140 | orchestrator | Monday 05 January 2026 01:53:33 +0000 (0:00:00.712) 0:00:38.457 ******** 2026-01-05 01:53:41.521143 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:53:41.521147 | orchestrator | 2026-01-05 01:53:41.521151 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-01-05 01:53:41.521155 | orchestrator | Monday 05 January 2026 01:53:33 +0000 (0:00:00.148) 0:00:38.605 ******** 2026-01-05 01:53:41.521158 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:53:41.521162 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:53:41.521166 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:53:41.521170 | orchestrator | 2026-01-05 01:53:41.521173 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-05 01:53:41.521177 | orchestrator | Monday 05 January 2026 01:53:33 +0000 (0:00:00.322) 0:00:38.928 ******** 2026-01-05 01:53:41.521181 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:53:41.521185 | orchestrator | 2026-01-05 01:53:41.521189 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-05 01:53:41.521193 | orchestrator | Monday 05 January 2026 01:53:34 +0000 (0:00:00.774) 0:00:39.703 ******** 2026-01-05 01:53:41.521201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:53:41.521226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:53:41.521235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:53:41.521240 | orchestrator | 2026-01-05 01:53:41.521244 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-05 01:53:41.521248 | orchestrator | Monday 05 January 2026 01:53:38 +0000 (0:00:03.836) 0:00:43.539 ******** 2026-01-05 01:53:41.521258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:53:44.812333 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:53:44.812496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:53:44.812520 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:53:44.812535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:53:44.812580 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:53:44.812595 | orchestrator | 2026-01-05 01:53:44.812609 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-05 01:53:44.812625 | orchestrator | Monday 05 January 2026 01:53:41 +0000 (0:00:03.142) 0:00:46.683 ******** 2026-01-05 01:53:44.812682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:53:44.812699 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:53:44.812714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:53:44.812738 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:53:44.812768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 01:54:18.665930 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:54:18.666154 | orchestrator | 2026-01-05 01:54:18.666180 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-05 01:54:18.666192 | orchestrator | Monday 05 January 2026 01:53:44 +0000 (0:00:03.288) 0:00:49.971 ******** 2026-01-05 01:54:18.666202 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:54:18.666212 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:54:18.666283 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:54:18.666294 | orchestrator | 2026-01-05 01:54:18.666305 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-05 01:54:18.666315 | orchestrator | Monday 05 January 2026 01:53:47 +0000 (0:00:02.963) 0:00:52.935 ******** 2026-01-05 01:54:18.666330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:54:18.666385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:54:18.666432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-05 01:54:18.666451 | orchestrator |
2026-01-05 01:54:18.666465 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-01-05 01:54:18.666480 | orchestrator | Monday 05 January 2026 01:53:51 +0000 (0:00:03.833) 0:00:56.768 ********
2026-01-05 01:54:18.666496 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:54:18.666523 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:54:18.666541 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:54:18.666558 | orchestrator |
2026-01-05 01:54:18.666576 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-01-05 01:54:18.666592 | orchestrator | Monday 05 January 2026 01:53:57 +0000 (0:00:05.624) 0:01:02.392 ********
2026-01-05 01:54:18.666610 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:54:18.666625 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:54:18.666640 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:54:18.666656 | orchestrator |
2026-01-05 01:54:18.666671 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-01-05 01:54:18.666687 | orchestrator | Monday 05 January 2026 01:54:00 +0000 (0:00:03.433) 0:01:05.826 ********
2026-01-05 01:54:18.666702 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:54:18.666716 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:54:18.666731 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:54:18.666745 | orchestrator |
2026-01-05 01:54:18.666760 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-01-05 01:54:18.666774 | orchestrator | Monday 05 January 2026 01:54:03 +0000 (0:00:03.276) 0:01:09.103 ********
2026-01-05 01:54:18.666788 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:54:18.666804 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:54:18.666820 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:54:18.666837 | orchestrator |
2026-01-05 01:54:18.666854 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-01-05 01:54:18.666872 | orchestrator | Monday 05 January 2026 01:54:06 +0000 (0:00:03.063) 0:01:12.167 ********
2026-01-05 01:54:18.666889 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:54:18.666904 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:54:18.666950 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:54:18.666966 | orchestrator |
2026-01-05 01:54:18.666983 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-01-05 01:54:18.666999 | orchestrator | Monday 05 January 2026 01:54:10 +0000 (0:00:03.554) 0:01:15.722 ********
2026-01-05 01:54:18.667015 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:54:18.667030 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:54:18.667043 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:54:18.667058 | orchestrator |
2026-01-05 01:54:18.667084 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-01-05 01:54:18.667098 | orchestrator | Monday 05 January 2026 01:54:11 +0000 (0:00:00.538) 0:01:16.260 ********
2026-01-05 01:54:18.667112 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-01-05 01:54:18.667128 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:54:18.667142 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-01-05 01:54:18.667157 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:54:18.667173 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-01-05 01:54:18.667190 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:54:18.667207 | orchestrator |
2026-01-05 01:54:18.667257 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-01-05 01:54:18.667274 | orchestrator | Monday 05 January 2026 01:54:14 +0000 (0:00:03.173) 0:01:19.433 ********
2026-01-05 01:54:18.667290 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:54:18.667301 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:54:18.667311 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:54:18.667320 | orchestrator |
2026-01-05 01:54:18.667330 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-01-05 01:54:18.667354 | orchestrator | Monday 05 January 2026 01:54:18 +0000 (0:00:04.391) 0:01:23.825 ********
2026-01-05 01:55:36.434266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:55:36.434439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 01:55:36.434487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-05 01:55:36.434513 | orchestrator |
2026-01-05 01:55:36.434528 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-05 01:55:36.434540 | orchestrator | Monday 05 January 2026 01:54:22 +0000 (0:00:03.809) 0:01:27.635 ********
2026-01-05 01:55:36.434553 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:55:36.434565 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:55:36.434576 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:55:36.434588 | orchestrator |
2026-01-05 01:55:36.434601 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-01-05 01:55:36.434612 | orchestrator | Monday 05 January 2026 01:54:22 +0000 (0:00:00.536) 0:01:28.172 ********
2026-01-05 01:55:36.434623 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:55:36.434633 | orchestrator |
2026-01-05 01:55:36.434643 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-01-05 01:55:36.434652 | orchestrator | Monday 05 January 2026 01:54:25 +0000 (0:00:02.264) 0:01:30.436 ********
2026-01-05 01:55:36.434662 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:55:36.434673 | orchestrator |
2026-01-05 01:55:36.434684 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-01-05 01:55:36.434696 | orchestrator | Monday 05 January 2026 01:54:27 +0000 (0:00:02.442) 0:01:32.879 ********
2026-01-05 01:55:36.434707 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:55:36.434719 | orchestrator |
2026-01-05 01:55:36.434729 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-01-05 01:55:36.434741 | orchestrator | Monday 05 January 2026 01:54:29 +0000 (0:00:02.252) 0:01:35.131 ********
2026-01-05 01:55:36.434752 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:55:36.434762 | orchestrator |
2026-01-05 01:55:36.434773 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-01-05 01:55:36.434784 | orchestrator | Monday 05 January 2026 01:54:58 +0000 (0:00:28.703) 0:02:03.835 ********
2026-01-05 01:55:36.434796 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:55:36.434807 | orchestrator |
2026-01-05 01:55:36.434819 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-05 01:55:36.434830 | orchestrator | Monday 05 January 2026 01:55:00 +0000 (0:00:02.300) 0:02:06.135 ********
2026-01-05 01:55:36.434842 | orchestrator |
2026-01-05 01:55:36.434854 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-05 01:55:36.434866 | orchestrator | Monday 05 January 2026 01:55:01 +0000 (0:00:00.071) 0:02:06.207 ********
2026-01-05 01:55:36.434878 | orchestrator |
2026-01-05 01:55:36.434890 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-05 01:55:36.434904 | orchestrator | Monday 05 January 2026 01:55:01 +0000 (0:00:00.075) 0:02:06.283 ********
2026-01-05 01:55:36.434916 | orchestrator |
2026-01-05 01:55:36.434928 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-01-05 01:55:36.435113 | orchestrator | Monday 05 January 2026 01:55:01 +0000 (0:00:00.074) 0:02:06.358 ********
2026-01-05 01:55:36.435140 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:55:36.435149 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:55:36.435158 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:55:36.435177 | orchestrator |
2026-01-05 01:55:36.435192 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:55:36.435202 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-05 01:55:36.435211 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-05 01:55:36.435219 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-05 01:55:36.435226 | orchestrator |
2026-01-05 01:55:36.435233 | orchestrator |
2026-01-05 01:55:36.435240 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:55:36.435248 | orchestrator | Monday 05 January 2026 01:55:36 +0000 (0:00:35.224) 0:02:41.582 ********
2026-01-05 01:55:36.435255 | orchestrator | ===============================================================================
2026-01-05 01:55:36.435262 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.22s
2026-01-05 01:55:36.435269 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.70s
2026-01-05 01:55:36.435276 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.83s
2026-01-05 01:55:36.435296 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.62s
2026-01-05 01:55:36.764245 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.39s
2026-01-05 01:55:36.764325 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.20s
2026-01-05 01:55:36.764331 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.95s
2026-01-05 01:55:36.764336 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.84s
2026-01-05 01:55:36.764341 | orchestrator | glance : Copying over config.json files for services -------------------- 3.83s
2026-01-05 01:55:36.764345 | orchestrator | glance : Check glance containers ---------------------------------------- 3.81s
2026-01-05 01:55:36.764349 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.69s
2026-01-05 01:55:36.764353 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.55s
2026-01-05 01:55:36.764358 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.55s
2026-01-05 01:55:36.764362 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.43s
2026-01-05 01:55:36.764366 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.41s
2026-01-05 01:55:36.764370 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.34s
2026-01-05 01:55:36.764374 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.29s
2026-01-05 01:55:36.764378 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.29s
2026-01-05 01:55:36.764382 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.28s
2026-01-05 01:55:36.764386 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.17s
2026-01-05 01:55:39.076805 | orchestrator | 2026-01-05 01:55:39 | INFO  | Task c514acc0-d108-473d-864f-9fabbc1d0d58 (cinder) was prepared for execution.
2026-01-05 01:55:39.076899 | orchestrator | 2026-01-05 01:55:39 | INFO  | It takes a moment until task c514acc0-d108-473d-864f-9fabbc1d0d58 (cinder) has been started and output is visible here.
2026-01-05 01:56:16.229469 | orchestrator |
2026-01-05 01:56:16.229550 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:56:16.229557 | orchestrator |
2026-01-05 01:56:16.229562 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:56:16.229567 | orchestrator | Monday 05 January 2026 01:55:43 +0000 (0:00:00.267) 0:00:00.267 ********
2026-01-05 01:56:16.229589 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:56:16.229595 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:56:16.229599 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:56:16.229603 | orchestrator |
2026-01-05 01:56:16.229607 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:56:16.229611 | orchestrator | Monday 05 January 2026 01:55:43 +0000 (0:00:00.323) 0:00:00.590 ********
2026-01-05 01:56:16.229615 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-01-05 01:56:16.229620 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-01-05 01:56:16.229624 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-01-05 01:56:16.229628 | orchestrator |
2026-01-05 01:56:16.229632 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-01-05 01:56:16.229636 | orchestrator |
2026-01-05 01:56:16.229640 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-05 01:56:16.229644 | orchestrator | Monday 05 January 2026 01:55:44 +0000 (0:00:00.492) 0:00:01.082 ********
2026-01-05 01:56:16.229648 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:56:16.229654 | orchestrator |
2026-01-05 01:56:16.229658 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-01-05 01:56:16.229662 | orchestrator | Monday 05 January 2026 01:55:44 +0000 (0:00:00.553) 0:00:01.636 ********
2026-01-05 01:56:16.229666 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-01-05 01:56:16.229670 | orchestrator |
2026-01-05 01:56:16.229674 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-01-05 01:56:16.229689 | orchestrator | Monday 05 January 2026 01:55:48 +0000 (0:00:03.680) 0:00:05.316 ********
2026-01-05 01:56:16.229694 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-01-05 01:56:16.229698 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-01-05 01:56:16.229702 | orchestrator |
2026-01-05 01:56:16.229706 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-01-05 01:56:16.229710 | orchestrator | Monday 05 January 2026 01:55:55 +0000 (0:00:06.772) 0:00:12.089 ********
2026-01-05 01:56:16.229714 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 01:56:16.229718 | orchestrator |
2026-01-05 01:56:16.229722 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-01-05 01:56:16.229726 | orchestrator | Monday 05 January 2026 01:55:58 +0000 (0:00:03.577) 0:00:15.667 ********
2026-01-05 01:56:16.229730 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 01:56:16.229734 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-01-05 01:56:16.229739 | orchestrator |
2026-01-05 01:56:16.229742 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-01-05 01:56:16.229746 | orchestrator | Monday 05 January 2026 01:56:03 +0000 (0:00:04.323) 0:00:19.990 ********
2026-01-05 01:56:16.229750 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 01:56:16.229755 | orchestrator |
2026-01-05 01:56:16.229758 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-01-05 01:56:16.229762 | orchestrator | Monday 05 January 2026 01:56:06 +0000 (0:00:03.454) 0:00:23.445 ********
2026-01-05 01:56:16.229766 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-01-05 01:56:16.229770 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-01-05 01:56:16.229774 | orchestrator |
2026-01-05 01:56:16.229778 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-01-05 01:56:16.229782 | orchestrator | Monday 05 January 2026 01:56:14 +0000 (0:00:07.615) 0:00:31.061 ********
2026-01-05 01:56:16.229789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy':
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:56:16.229856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:56:16.229867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:56:16.229873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:16.229879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:16.229883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:16.229893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:16.229902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:22.322646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:22.322757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:22.322768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:22.322864 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:22.322875 | orchestrator |
2026-01-05 01:56:22.322882 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-05 01:56:22.322890 | orchestrator | Monday 05 January 2026 01:56:16 +0000 (0:00:02.216) 0:00:33.277 ********
2026-01-05 01:56:22.322896 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:56:22.322955 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:56:22.322965 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:56:22.322971 | orchestrator |
2026-01-05 01:56:22.322978 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-05 01:56:22.322984 | orchestrator | Monday 05 January 2026 01:56:16 +0000 (0:00:00.497) 0:00:33.774 ********
2026-01-05 01:56:22.322991 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:56:22.322997 | orchestrator |
2026-01-05 01:56:22.323003 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-01-05 01:56:22.323009 | orchestrator | Monday 05 January 2026 01:56:17 +0000 (0:00:00.587) 0:00:34.362 ********
2026-01-05 01:56:22.323016 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-01-05 01:56:22.323022 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-01-05 01:56:22.323028 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-01-05 01:56:22.323035 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-01-05 01:56:22.323041 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-01-05 01:56:22.323047 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-01-05 01:56:22.323053 | orchestrator |
2026-01-05 01:56:22.323059 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-01-05 01:56:22.323065 | orchestrator | Monday 05 January 2026 01:56:19 +0000 (0:00:01.721) 0:00:36.083 ********
2026-01-05 01:56:22.323099 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-05 01:56:22.323108 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:56:22.323123 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:56:22.323132 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:56:22.323143 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:56:33.921191 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-05 01:56:33.921269 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:56:33.921300 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:56:33.921310 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:56:33.921316 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:56:33.921334 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 
01:56:33.921343 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-05 01:56:33.921353 | orchestrator | 2026-01-05 01:56:33.921360 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-05 01:56:33.921366 | orchestrator | Monday 05 January 2026 01:56:22 +0000 (0:00:03.492) 0:00:39.576 ******** 2026-01-05 01:56:33.921371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:56:33.921378 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:56:33.921383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-05 01:56:33.921388 | orchestrator | 2026-01-05 01:56:33.921392 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-05 01:56:33.921397 | orchestrator | Monday 05 January 2026 01:56:24 +0000 (0:00:01.624) 0:00:41.200 ******** 2026-01-05 01:56:33.921404 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-01-05 01:56:33.921412 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-01-05 01:56:33.921419 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-01-05 01:56:33.921426 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:56:33.921433 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:56:33.921440 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-01-05 01:56:33.921447 | orchestrator | 2026-01-05 01:56:33.921454 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-05 01:56:33.921462 | orchestrator | Monday 05 January 2026 01:56:27 +0000 (0:00:02.929) 0:00:44.130 ******** 2026-01-05 01:56:33.921470 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-01-05 01:56:33.921478 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-05 01:56:33.921485 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-05 01:56:33.921489 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-05 01:56:33.921494 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-05 01:56:33.921498 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-05 01:56:33.921504 | orchestrator | 2026-01-05 01:56:33.921511 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-05 01:56:33.921518 | orchestrator | Monday 05 January 2026 01:56:28 +0000 (0:00:01.107) 0:00:45.237 ******** 2026-01-05 01:56:33.921529 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:56:33.921539 | orchestrator | 2026-01-05 01:56:33.921546 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-05 01:56:33.921552 | orchestrator | Monday 05 January 2026 01:56:28 +0000 (0:00:00.138) 0:00:45.376 ******** 2026-01-05 01:56:33.921559 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:56:33.921567 | orchestrator | 
skipping: [testbed-node-1] 2026-01-05 01:56:33.921574 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:56:33.921582 | orchestrator | 2026-01-05 01:56:33.921589 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-05 01:56:33.921598 | orchestrator | Monday 05 January 2026 01:56:28 +0000 (0:00:00.525) 0:00:45.902 ******** 2026-01-05 01:56:33.921604 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:56:33.921612 | orchestrator | 2026-01-05 01:56:33.921619 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-05 01:56:33.921627 | orchestrator | Monday 05 January 2026 01:56:29 +0000 (0:00:00.572) 0:00:46.475 ******** 2026-01-05 01:56:33.921644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:56:34.814712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:56:34.814861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:56:34.814874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:34.814883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:34.814890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:34.814942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:34.814950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:34.814957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 
01:56:34.814965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:34.814972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:34.814984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:34.814991 | orchestrator | 2026-01-05 01:56:34.814999 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-05 01:56:34.815008 | orchestrator | Monday 05 January 2026 01:56:33 +0000 (0:00:04.493) 0:00:50.968 ******** 2026-01-05 01:56:34.815025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:56:34.918442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:56:34.918523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:56:34.918531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:56:34.918555 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:56:34.918562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:56:34.918577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:56:34.918594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-01-05 01:56:34.918598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:56:34.918602 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:56:34.918606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 01:56:34.918615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 01:56:34.918619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 01:56:34.918626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 01:56:34.918630 | orchestrator | skipping: 
[testbed-node-2]
2026-01-05 01:56:34.918635 | orchestrator |
2026-01-05 01:56:34.918640 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-01-05 01:56:34.918648 | orchestrator | Monday 05 January 2026 01:56:34 +0000 (0:00:00.907) 0:00:51.876 ********
2026-01-05 01:56:35.484550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:35.484630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:35.484667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:35.484674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:35.484678 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:56:35.484695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:35.484711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:35.484716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:35.484720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:35.484728 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:56:35.484732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:35.484740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:35.484769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:40.256023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:40.256113 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:56:40.256122 | orchestrator |
2026-01-05 01:56:40.256128 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-01-05 01:56:40.256135 | orchestrator | Monday 05 January 2026 01:56:35 +0000 (0:00:00.887) 0:00:52.764 ********
2026-01-05 01:56:40.256159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:40.256167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:40.256183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:40.256202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:40.256209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:40.256218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:40.256223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:40.256228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:40.256237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:40.256245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:53.248434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:53.248557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:53.248567 | orchestrator |
2026-01-05 01:56:53.248573 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-01-05 01:56:53.248579 | orchestrator | Monday 05 January 2026 01:56:40 +0000 (0:00:04.548) 0:00:57.313 ********
2026-01-05 01:56:53.248583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-01-05 01:56:53.248588 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-01-05 01:56:53.248592 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-01-05 01:56:53.248596 | orchestrator |
2026-01-05 01:56:53.248601 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-01-05 01:56:53.248605 | orchestrator | Monday 05 January 2026 01:56:42 +0000 (0:00:01.959) 0:00:59.272 ********
2026-01-05 01:56:53.248609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:53.248627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:53.248644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:53.248654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:53.248659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:53.248663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:53.248669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:53.248674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:53.248682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:55.439029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:55.439106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:55.439112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:55.439117 | orchestrator |
2026-01-05 01:56:55.439122 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-01-05 01:56:55.439128 | orchestrator | Monday 05 January 2026 01:56:53 +0000 (0:00:11.026) 0:01:10.298 ********
2026-01-05 01:56:55.439132 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:56:55.439137 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:56:55.439141 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:56:55.439145 | orchestrator |
2026-01-05 01:56:55.439149 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-01-05 01:56:55.439165 | orchestrator | Monday 05 January 2026 01:56:54 +0000 (0:00:01.528) 0:01:11.827 ********
2026-01-05 01:56:55.439171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:55.439192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:55.439208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:55.439212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:55.439216 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:56:55.439220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:55.439227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:55.439235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:55.439244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:59.055792 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:56:59.055873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:59.055887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 01:56:59.055911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-05 01:56:59.055919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-05 01:56:59.055941 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:56:59.055947 | orchestrator |
2026-01-05 01:56:59.055952 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-01-05 01:56:59.055957 | orchestrator | Monday 05 January 2026 01:56:55 +0000 (0:00:00.663) 0:01:12.490 ********
2026-01-05 01:56:59.055961 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:56:59.055964 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:56:59.055968 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:56:59.055972 | orchestrator |
2026-01-05 01:56:59.055976 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-01-05 01:56:59.055980 | orchestrator | Monday 05 January 2026 01:56:56 +0000 (0:00:00.562) 0:01:13.053 ********
2026-01-05 01:56:59.055999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-05 01:56:59.056009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:56:59.056016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 01:56:59.056031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:59.056038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:59.056044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 01:56:59.056057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:58:24.416698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:58:24.417583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-05 01:58:24.417687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:58:24.417707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-05 01:58:24.417715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})
2026-01-05 01:58:24.417723 | orchestrator |
2026-01-05 01:58:24.417730 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-05 01:58:24.417735 | orchestrator | Monday 05 January 2026 01:56:59 +0000 (0:00:03.052) 0:01:16.105 ********
2026-01-05 01:58:24.417739 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:58:24.417743 | orchestrator | skipping: [testbed-node-1]
2026-01-05 01:58:24.417747 | orchestrator | skipping: [testbed-node-2]
2026-01-05 01:58:24.417751 | orchestrator |
2026-01-05 01:58:24.417755 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-01-05 01:58:24.417759 | orchestrator | Monday 05 January 2026 01:56:59 +0000 (0:00:00.325) 0:01:16.431 ********
2026-01-05 01:58:24.417763 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:58:24.417767 | orchestrator |
2026-01-05 01:58:24.417788 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-01-05 01:58:24.417792 | orchestrator | Monday 05 January 2026 01:57:01 +0000 (0:00:02.174) 0:01:18.605 ********
2026-01-05 01:58:24.417796 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:58:24.417799 | orchestrator |
2026-01-05 01:58:24.417803 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-01-05 01:58:24.417807 | orchestrator | Monday 05 January 2026 01:57:04 +0000 (0:00:02.418) 0:01:21.024 ********
2026-01-05 01:58:24.417811 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:58:24.417814 | orchestrator |
2026-01-05 01:58:24.417818 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-05 01:58:24.417822 | orchestrator | Monday 05 January 2026 01:57:23 +0000 (0:00:19.787) 0:01:40.811 ********
2026-01-05 01:58:24.417831 | orchestrator |
2026-01-05 01:58:24.417835 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-05 01:58:24.417839 | orchestrator | Monday 05 January 2026 01:57:24 +0000 (0:00:00.280) 0:01:41.092 ********
2026-01-05 01:58:24.417842 | orchestrator |
2026-01-05 01:58:24.417846 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-05 01:58:24.417850 | orchestrator | Monday 05 January 2026 01:57:24 +0000 (0:00:00.070) 0:01:41.162 ********
2026-01-05 01:58:24.417854 | orchestrator |
2026-01-05 01:58:24.417858 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-01-05 01:58:24.417862 | orchestrator | Monday 05 January 2026 01:57:24 +0000 (0:00:00.072) 0:01:41.235 ********
2026-01-05 01:58:24.417865 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:58:24.417869 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:58:24.417873 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:58:24.417876 | orchestrator |
2026-01-05 01:58:24.417880 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-01-05 01:58:24.417884 | orchestrator | Monday 05 January 2026 01:57:46 +0000 (0:00:22.215) 0:02:03.451 ********
2026-01-05 01:58:24.417888 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:58:24.417891 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:58:24.417895 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:58:24.417899 | orchestrator |
2026-01-05 01:58:24.417903 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-01-05 01:58:24.417907 | orchestrator | Monday 05 January 2026 01:57:51 +0000 (0:00:05.085) 0:02:08.536 ********
2026-01-05 01:58:24.417911 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:58:24.417915 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:58:24.417919 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:58:24.417923 | orchestrator |
2026-01-05 01:58:24.417930 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-01-05 01:58:24.417934 | orchestrator | Monday 05 January 2026 01:58:17 +0000 (0:00:26.360) 0:02:34.897 ********
2026-01-05 01:58:24.417938 | orchestrator | changed: [testbed-node-0]
2026-01-05 01:58:24.417942 | orchestrator | changed: [testbed-node-1]
2026-01-05 01:58:24.417945 | orchestrator | changed: [testbed-node-2]
2026-01-05 01:58:24.417949 | orchestrator |
2026-01-05 01:58:24.417953 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-01-05 01:58:24.417958 | orchestrator | Monday 05 January 2026 01:58:24 +0000 (0:00:06.186) 0:02:41.083 ********
2026-01-05 01:58:24.417962 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:58:24.417968 | orchestrator |
2026-01-05 01:58:24.417975 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 01:58:24.417983 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-05 01:58:24.417991 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 01:58:24.417998 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 01:58:24.418003 | orchestrator |
2026-01-05 01:58:24.418009 | orchestrator |
2026-01-05 01:58:24.418128 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 01:58:24.418138 | orchestrator | Monday 05 January 2026 01:58:24 +0000 (0:00:00.276) 0:02:41.359 ********
2026-01-05 01:58:24.418145 | orchestrator | ===============================================================================
2026-01-05 01:58:24.418150 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.36s
2026-01-05 01:58:24.418157 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.22s
2026-01-05 01:58:24.418163 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.79s
2026-01-05 01:58:24.418169 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.03s
2026-01-05 01:58:24.418184 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.62s
2026-01-05 01:58:24.418190 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.77s
2026-01-05 01:58:24.418196 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.19s
2026-01-05 01:58:24.418202 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.09s
2026-01-05 01:58:24.418209 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.55s
2026-01-05 01:58:24.418215 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.49s
2026-01-05 01:58:24.418221 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.32s
2026-01-05 01:58:24.418228 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.68s
2026-01-05 01:58:24.418234 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.58s
2026-01-05 01:58:24.418240 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.49s
2026-01-05 01:58:24.418258 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.45s
2026-01-05 01:58:24.799986 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.05s
2026-01-05 01:58:24.800083 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.93s
2026-01-05 01:58:24.800094 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.42s
2026-01-05 01:58:24.800101 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.22s
2026-01-05 01:58:24.800107 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.17s
2026-01-05 01:58:27.177872 | orchestrator | 2026-01-05 01:58:27 | INFO  | Task 0e93fc24-1552-453c-97e5-a48b99f88817 (barbican) was prepared for execution.
2026-01-05 01:58:27.178277 | orchestrator | 2026-01-05 01:58:27 | INFO  | It takes a moment until task 0e93fc24-1552-453c-97e5-a48b99f88817 (barbican) has been started and output is visible here.
2026-01-05 01:59:13.485722 | orchestrator |
2026-01-05 01:59:13.486692 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 01:59:13.486738 | orchestrator |
2026-01-05 01:59:13.486747 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 01:59:13.486755 | orchestrator | Monday 05 January 2026 01:58:31 +0000 (0:00:00.274) 0:00:00.274 ********
2026-01-05 01:59:13.486763 | orchestrator | ok: [testbed-node-0]
2026-01-05 01:59:13.486772 | orchestrator | ok: [testbed-node-1]
2026-01-05 01:59:13.486779 | orchestrator | ok: [testbed-node-2]
2026-01-05 01:59:13.486786 | orchestrator |
2026-01-05 01:59:13.486794 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 01:59:13.486802 | orchestrator | Monday 05 January 2026 01:58:31 +0000 (0:00:00.301) 0:00:00.576 ********
2026-01-05 01:59:13.486809 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-01-05 01:59:13.486817 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-01-05 01:59:13.486824 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-01-05 01:59:13.486831 | orchestrator |
2026-01-05 01:59:13.486838 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-01-05 01:59:13.486845 | orchestrator |
2026-01-05 01:59:13.486852 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-05 01:59:13.486876 | orchestrator | Monday 05 January 2026 01:58:32 +0000 (0:00:00.452) 0:00:01.029 ********
2026-01-05 01:59:13.486884 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 01:59:13.486892 | orchestrator |
2026-01-05 01:59:13.486899 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-01-05 01:59:13.486907 | orchestrator | Monday 05 January 2026 01:58:32 +0000 (0:00:00.579) 0:00:01.609 ********
2026-01-05 01:59:13.486914 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-01-05 01:59:13.486942 | orchestrator |
2026-01-05 01:59:13.486949 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-01-05 01:59:13.486956 | orchestrator | Monday 05 January 2026 01:58:36 +0000 (0:00:03.794) 0:00:05.404 ********
2026-01-05 01:59:13.486963 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-01-05 01:59:13.486971 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-01-05 01:59:13.486978 | orchestrator |
2026-01-05 01:59:13.486985 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-01-05 01:59:13.486992 | orchestrator | Monday 05 January 2026 01:58:43 +0000 (0:00:06.805) 0:00:12.210 ********
2026-01-05 01:59:13.486999 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 01:59:13.487007 | orchestrator |
2026-01-05 01:59:13.487014 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-01-05 01:59:13.487021 | orchestrator | Monday 05 January 2026 01:58:46 +0000 (0:00:03.450) 0:00:15.660 ********
2026-01-05 01:59:13.487029 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 01:59:13.487037 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-01-05 01:59:13.487045 | orchestrator |
2026-01-05 01:59:13.487052 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-01-05 01:59:13.487059 | orchestrator | Monday 05 January 2026 01:58:50 +0000 (0:00:04.210) 0:00:19.871 ********
2026-01-05 01:59:13.487067 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 01:59:13.487075 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-01-05 01:59:13.487082 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-01-05 01:59:13.487089 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-01-05 01:59:13.487098 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-01-05 01:59:13.487105 | orchestrator |
2026-01-05 01:59:13.487113 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-01-05 01:59:13.487120 | orchestrator | Monday 05 January 2026 01:59:07 +0000 (0:00:16.626) 0:00:36.497 ********
2026-01-05 01:59:13.487127 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-01-05 01:59:13.487134 | orchestrator |
2026-01-05 01:59:13.487142 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-01-05 01:59:13.487149 | orchestrator | Monday 05 January 2026 01:59:11 +0000 (0:00:04.068) 0:00:40.566 ********
2026-01-05 01:59:13.487162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes':
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:13.487199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:13.487215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:13.487222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:13.487230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:13.487234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:13.487245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:19.881318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:59:19.881437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-05 01:59:19.881448 | orchestrator |
2026-01-05 01:59:19.881456 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-01-05 01:59:19.881463 | orchestrator | Monday 05 January 2026 01:59:13 +0000 (0:00:01.788) 0:00:42.355 ********
2026-01-05 01:59:19.881470 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-01-05 01:59:19.881476 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-01-05 01:59:19.881481 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-01-05 01:59:19.881487 | orchestrator |
2026-01-05 01:59:19.881492 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-01-05 01:59:19.881498 | orchestrator | Monday 05 January 2026 01:59:14 +0000 (0:00:01.138) 0:00:43.493 ********
2026-01-05 01:59:19.881505 | orchestrator | skipping: [testbed-node-0]
2026-01-05 01:59:19.881511 | orchestrator |
2026-01-05 01:59:19.881517 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-01-05 01:59:19.881522 | orchestrator | Monday 05 January 2026 01:59:15 +0000 (0:00:00.435) 0:00:43.928 ********
2026-01-05 01:59:19.881528 | orchestrator |
skipping: [testbed-node-0] 2026-01-05 01:59:19.881534 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:59:19.881540 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:59:19.881546 | orchestrator | 2026-01-05 01:59:19.881552 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-05 01:59:19.881558 | orchestrator | Monday 05 January 2026 01:59:15 +0000 (0:00:00.332) 0:00:44.261 ******** 2026-01-05 01:59:19.881565 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 01:59:19.881571 | orchestrator | 2026-01-05 01:59:19.881578 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-05 01:59:19.881584 | orchestrator | Monday 05 January 2026 01:59:15 +0000 (0:00:00.560) 0:00:44.821 ******** 2026-01-05 01:59:19.881592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:19.881637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:19.881646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:19.881651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:19.881657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:19.881661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:19.881669 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:19.881678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:21.445532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:21.445643 | orchestrator | 2026-01-05 01:59:21.445661 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-01-05 01:59:21.445675 | orchestrator | Monday 05 January 2026 01:59:19 +0000 (0:00:03.920) 0:00:48.741 ******** 2026-01-05 01:59:21.445689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:59:21.445702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:59:21.445715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:59:21.445751 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:59:21.445764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:59:21.445800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:59:21.445814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:59:21.445825 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:59:21.445836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:59:21.445848 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:59:21.445868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:59:21.445879 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:59:21.445890 | orchestrator | 2026-01-05 01:59:21.445901 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-05 01:59:21.445913 | orchestrator | Monday 05 January 2026 01:59:20 +0000 (0:00:00.709) 0:00:49.451 ******** 2026-01-05 01:59:21.445939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:59:24.997058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:59:24.997147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 
01:59:24.997158 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:59:24.997164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:59:24.997187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:59:24.997192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:59:24.997196 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:59:24.997212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:59:24.997217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:59:24.997221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:59:24.997229 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:59:24.997233 | orchestrator | 2026-01-05 01:59:24.997238 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-01-05 01:59:24.997243 | orchestrator | Monday 05 January 2026 01:59:21 +0000 (0:00:00.873) 0:00:50.325 ******** 2026-01-05 01:59:24.997317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:24.997325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:24.997336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:34.801235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:34.801424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:34.801438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:34.801450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:34.801460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:34.801533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:34.801546 | orchestrator | 2026-01-05 01:59:34.801558 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-01-05 01:59:34.801569 | orchestrator | Monday 05 January 2026 01:59:24 +0000 (0:00:03.548) 0:00:53.873 ******** 2026-01-05 01:59:34.801578 | orchestrator | changed: [testbed-node-0] 2026-01-05 01:59:34.801589 | orchestrator | changed: [testbed-node-1] 2026-01-05 01:59:34.801598 | orchestrator | changed: [testbed-node-2] 2026-01-05 01:59:34.801607 | orchestrator | 2026-01-05 01:59:34.801633 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-01-05 01:59:34.801643 | orchestrator | Monday 05 January 2026 01:59:26 +0000 (0:00:01.589) 0:00:55.462 ******** 2026-01-05 01:59:34.801661 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 01:59:34.801670 | orchestrator | 2026-01-05 01:59:34.801679 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-01-05 01:59:34.801688 | orchestrator | Monday 05 January 2026 01:59:27 +0000 (0:00:00.975) 0:00:56.438 ******** 2026-01-05 01:59:34.801697 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:59:34.801707 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:59:34.801716 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:59:34.801725 | orchestrator | 2026-01-05 01:59:34.801734 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-01-05 01:59:34.801744 | orchestrator | Monday 05 January 2026 01:59:28 +0000 (0:00:00.563) 0:00:57.001 ******** 2026-01-05 01:59:34.801755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:34.801767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:34.801782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:34.801799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:35.672848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:35.672942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:35.672954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:35.672963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:35.672969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:35.672977 | orchestrator | 2026-01-05 01:59:35.672986 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-05 01:59:35.673010 | orchestrator | Monday 05 January 2026 01:59:34 +0000 (0:00:06.676) 0:01:03.677 ******** 2026-01-05 01:59:35.673032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:59:35.673062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:59:35.673070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:59:35.673075 | orchestrator | skipping: [testbed-node-0] 2026-01-05 01:59:35.673080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:59:35.673084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:59:35.673092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:59:35.673099 | orchestrator | skipping: [testbed-node-1] 2026-01-05 01:59:35.673109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 01:59:38.242371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 01:59:38.242483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 01:59:38.242497 | orchestrator | skipping: [testbed-node-2] 2026-01-05 01:59:38.242508 | orchestrator | 2026-01-05 01:59:38.242519 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-01-05 01:59:38.242531 | orchestrator | Monday 05 January 2026 01:59:35 +0000 (0:00:00.867) 0:01:04.545 ******** 2026-01-05 01:59:38.242541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:38.242586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:38.242640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 01:59:38.242647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:38.242655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:38.242660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:38.242670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:38.242683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:38.242689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-05 01:59:38.242694 | orchestrator | 2026-01-05 01:59:38.242700 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-05 01:59:38.242709 | orchestrator | Monday 05 January 2026 01:59:38 +0000 (0:00:02.571) 0:01:07.116 ******** 2026-01-05 02:00:15.682861 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:00:15.682983 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
02:00:15.682994 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:00:15.683001 | orchestrator | 2026-01-05 02:00:15.683009 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-05 02:00:15.683017 | orchestrator | Monday 05 January 2026 01:59:38 +0000 (0:00:00.307) 0:01:07.423 ******** 2026-01-05 02:00:15.683023 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:00:15.683030 | orchestrator | 2026-01-05 02:00:15.683036 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-05 02:00:15.683043 | orchestrator | Monday 05 January 2026 01:59:40 +0000 (0:00:02.326) 0:01:09.750 ******** 2026-01-05 02:00:15.683050 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:00:15.683056 | orchestrator | 2026-01-05 02:00:15.683062 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-05 02:00:15.683068 | orchestrator | Monday 05 January 2026 01:59:43 +0000 (0:00:02.362) 0:01:12.112 ******** 2026-01-05 02:00:15.683074 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:00:15.683080 | orchestrator | 2026-01-05 02:00:15.683087 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-05 02:00:15.683094 | orchestrator | Monday 05 January 2026 01:59:55 +0000 (0:00:11.820) 0:01:23.933 ******** 2026-01-05 02:00:15.683101 | orchestrator | 2026-01-05 02:00:15.683108 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-05 02:00:15.683114 | orchestrator | Monday 05 January 2026 01:59:55 +0000 (0:00:00.307) 0:01:24.240 ******** 2026-01-05 02:00:15.683122 | orchestrator | 2026-01-05 02:00:15.683129 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-05 02:00:15.683135 | orchestrator | Monday 05 January 2026 01:59:55 +0000 (0:00:00.081) 0:01:24.322 ******** 2026-01-05 
02:00:15.683142 | orchestrator | 2026-01-05 02:00:15.683147 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-05 02:00:15.683170 | orchestrator | Monday 05 January 2026 01:59:55 +0000 (0:00:00.076) 0:01:24.398 ******** 2026-01-05 02:00:15.683193 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:00:15.683198 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:00:15.683202 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:00:15.683205 | orchestrator | 2026-01-05 02:00:15.683210 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-05 02:00:15.683214 | orchestrator | Monday 05 January 2026 02:00:01 +0000 (0:00:06.117) 0:01:30.516 ******** 2026-01-05 02:00:15.683218 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:00:15.683222 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:00:15.683226 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:00:15.683230 | orchestrator | 2026-01-05 02:00:15.683234 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-05 02:00:15.683237 | orchestrator | Monday 05 January 2026 02:00:07 +0000 (0:00:05.501) 0:01:36.017 ******** 2026-01-05 02:00:15.683241 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:00:15.683245 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:00:15.683249 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:00:15.683253 | orchestrator | 2026-01-05 02:00:15.683256 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:00:15.683264 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 02:00:15.683271 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 02:00:15.683274 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 02:00:15.683278 | orchestrator | 2026-01-05 02:00:15.683282 | orchestrator | 2026-01-05 02:00:15.683297 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:00:15.683301 | orchestrator | Monday 05 January 2026 02:00:15 +0000 (0:00:08.197) 0:01:44.215 ******** 2026-01-05 02:00:15.683305 | orchestrator | =============================================================================== 2026-01-05 02:00:15.683309 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.63s 2026-01-05 02:00:15.683313 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.82s 2026-01-05 02:00:15.683317 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.20s 2026-01-05 02:00:15.683320 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.81s 2026-01-05 02:00:15.683324 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.68s 2026-01-05 02:00:15.683328 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.12s 2026-01-05 02:00:15.683332 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.50s 2026-01-05 02:00:15.683336 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.21s 2026-01-05 02:00:15.683339 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.07s 2026-01-05 02:00:15.683343 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.92s 2026-01-05 02:00:15.683347 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.79s 2026-01-05 02:00:15.683351 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.55s 
2026-01-05 02:00:15.683355 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.45s 2026-01-05 02:00:15.683358 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.57s 2026-01-05 02:00:15.683362 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.36s 2026-01-05 02:00:15.683381 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.33s 2026-01-05 02:00:15.683388 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.79s 2026-01-05 02:00:15.683397 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.59s 2026-01-05 02:00:15.683413 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.14s 2026-01-05 02:00:15.683418 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.98s 2026-01-05 02:00:18.081768 | orchestrator | 2026-01-05 02:00:18 | INFO  | Task 64382db6-cd8d-4c58-bd1e-1612e62e53a5 (designate) was prepared for execution. 2026-01-05 02:00:18.081846 | orchestrator | 2026-01-05 02:00:18 | INFO  | It takes a moment until task 64382db6-cd8d-4c58-bd1e-1612e62e53a5 (designate) has been started and output is visible here. 
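The barbican play above closes with a `PLAY RECAP` block giving per-host task counters (`ok=24  changed=18  unreachable=0 failed=0 …`). When post-processing console logs like this one, those counters are easy to extract mechanically. The sketch below is an illustrative helper, not part of the job itself; the function name `parse_recap` and the regex are assumptions for this example.

```python
import re

# Matches the key=value counter pairs in an Ansible "PLAY RECAP" host line,
# e.g. "testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 ...".
# The hostname itself contains no '=', so only the counters are captured.
RECAP_RE = re.compile(r"(\w+)=(\d+)")

def parse_recap(line: str) -> dict[str, int]:
    """Return the recap counters of one host line as {'ok': 24, 'changed': 18, ...}."""
    return {key: int(val) for key, val in RECAP_RE.findall(line)}

# One of the recap lines from the barbican play above:
line = ("testbed-node-0 : ok=24  changed=18  unreachable=0 "
        "failed=0 skipped=7  rescued=0 ignored=0")
stats = parse_recap(line)
print(stats["ok"], stats["failed"])  # 24 0
```

A failed deployment would show up here as `stats["failed"] > 0` or `stats["unreachable"] > 0`, which is a convenient condition for scripted triage of periodic jobs like this `testbed-update` run.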
2026-01-05 02:00:51.242762 | orchestrator | 2026-01-05 02:00:51.243696 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 02:00:51.243740 | orchestrator | 2026-01-05 02:00:51.243746 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 02:00:51.243751 | orchestrator | Monday 05 January 2026 02:00:22 +0000 (0:00:00.275) 0:00:00.275 ******** 2026-01-05 02:00:51.243756 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:00:51.243761 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:00:51.243765 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:00:51.243769 | orchestrator | 2026-01-05 02:00:51.243774 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 02:00:51.243778 | orchestrator | Monday 05 January 2026 02:00:22 +0000 (0:00:00.322) 0:00:00.597 ******** 2026-01-05 02:00:51.243783 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-05 02:00:51.243788 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-05 02:00:51.243791 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-05 02:00:51.243795 | orchestrator | 2026-01-05 02:00:51.243799 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-05 02:00:51.243803 | orchestrator | 2026-01-05 02:00:51.243807 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-05 02:00:51.243810 | orchestrator | Monday 05 January 2026 02:00:23 +0000 (0:00:00.451) 0:00:01.049 ******** 2026-01-05 02:00:51.243815 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:00:51.243823 | orchestrator | 2026-01-05 02:00:51.243830 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-01-05 02:00:51.243838 | orchestrator | Monday 05 January 2026 02:00:23 +0000 (0:00:00.570) 0:00:01.619 ******** 2026-01-05 02:00:51.243847 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-05 02:00:51.243852 | orchestrator | 2026-01-05 02:00:51.243858 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-01-05 02:00:51.243864 | orchestrator | Monday 05 January 2026 02:00:27 +0000 (0:00:03.803) 0:00:05.423 ******** 2026-01-05 02:00:51.243870 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-05 02:00:51.243877 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-05 02:00:51.243884 | orchestrator | 2026-01-05 02:00:51.243890 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-05 02:00:51.243896 | orchestrator | Monday 05 January 2026 02:00:34 +0000 (0:00:06.703) 0:00:12.126 ******** 2026-01-05 02:00:51.243903 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-05 02:00:51.243909 | orchestrator | 2026-01-05 02:00:51.243917 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-05 02:00:51.243939 | orchestrator | Monday 05 January 2026 02:00:37 +0000 (0:00:03.338) 0:00:15.464 ******** 2026-01-05 02:00:51.243947 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 02:00:51.243954 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-05 02:00:51.243959 | orchestrator | 2026-01-05 02:00:51.243964 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-01-05 02:00:51.243986 | orchestrator | Monday 05 January 2026 02:00:41 +0000 (0:00:04.184) 0:00:19.649 ******** 2026-01-05 02:00:51.243991 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-01-05 02:00:51.243996 | orchestrator | 2026-01-05 02:00:51.244000 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-01-05 02:00:51.244005 | orchestrator | Monday 05 January 2026 02:00:45 +0000 (0:00:03.338) 0:00:22.988 ******** 2026-01-05 02:00:51.244010 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-05 02:00:51.244014 | orchestrator | 2026-01-05 02:00:51.244019 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-05 02:00:51.244023 | orchestrator | Monday 05 January 2026 02:00:49 +0000 (0:00:04.050) 0:00:27.038 ******** 2026-01-05 02:00:51.244036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:00:51.244065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:00:51.244070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:00:51.244131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:00:51.244144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:00:51.244150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:00:51.244156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:51.244169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:57.811519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:57.811625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:57.811672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:57.811695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:57.811700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:57.811706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:57.811723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:57.811728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 
02:00:57.811734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:00:57.811746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:00:57.811752 | orchestrator |
2026-01-05 02:00:57.811758 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-01-05 02:00:57.811764 | orchestrator | Monday 05 January 2026 02:00:52 +0000 (0:00:02.973) 0:00:30.012 ********
2026-01-05 02:00:57.811770 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:00:57.811776 | orchestrator |
2026-01-05 02:00:57.811781 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-01-05 02:00:57.811786 | orchestrator | Monday 05 January 2026 02:00:52 +0000 (0:00:00.518) 0:00:30.155 ********
2026-01-05 02:00:57.811791 | orchestrator | skipping: [testbed-node-0]
2026-01-05
02:00:57.811796 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:00:57.811800 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:00:57.811805 | orchestrator |
2026-01-05 02:00:57.811810 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-05 02:00:57.811815 | orchestrator | Monday 05 January 2026 02:00:52 +0000 (0:00:00.518) 0:00:30.673 ********
2026-01-05 02:00:57.811820 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:00:57.811825 | orchestrator |
2026-01-05 02:00:57.811830 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-01-05 02:00:57.811835 | orchestrator | Monday 05 January 2026 02:00:53 +0000 (0:00:00.563) 0:00:31.237 ********
2026-01-05 02:00:57.811841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 02:00:57.811854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:00:59.812474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:00:59.812581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:00:59.812707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:00.706318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:00.706410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:00.706417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:00.706423 | orchestrator |
2026-01-05 02:01:00.706428 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-01-05 02:01:00.706433 | orchestrator | Monday 05 January 2026 02:00:59 +0000 (0:00:06.459) 0:00:37.696 ********
2026-01-05 02:01:00.706439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:00.706446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 02:01:00.706478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:00.706484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:00.706492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:00.706497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-01-05 02:01:00.706501 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:01:00.706506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:00.706510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 02:01:00.706518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:00.706525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.459977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.460102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.460114 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:01:01.460125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:01.460135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 02:01:01.460164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.460172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.460196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 
02:01:01.460205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:01.460211 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:01:01.460218 | orchestrator |
2026-01-05 02:01:01.460225 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-01-05 02:01:01.460233 | orchestrator | Monday 05 January 2026 02:01:00 +0000 (0:00:01.012) 0:00:38.709 ********
2026-01-05 02:01:01.460239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 02:01:01.460252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 02:01:01.460258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.460269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.828514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.828612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.828620 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:01:01.828628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:01.828649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 02:01:01.828656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.828660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.828680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.828684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.828688 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:01:01.828692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:01.828708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 02:01:01.828712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:01.828716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-05 02:01:01.828728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-05 02:01:06.525669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:06.525792 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:01:06.525812 | orchestrator |
2026-01-05 02:01:06.525825 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-01-05 
02:01:06.525838 | orchestrator | Monday 05 January 2026 02:01:01 +0000 (0:00:01.006) 0:00:39.715 ******** 2026-01-05 02:01:06.525881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:01:06.525896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:01:06.525908 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:01:06.525954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:06.525970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:06.525990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:06.526002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:06.526182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:06.526203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:06.526225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:06.526250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:18.236610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:18.236717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:18.236727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:18.236734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:18.236741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:18.236760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:18.236781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:18.236793 | orchestrator |
2026-01-05 02:01:18.236801 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-01-05 02:01:18.236808 | orchestrator | Monday 05 January 2026 02:01:08 +0000 (0:00:06.670) 0:00:46.385 ********
2026-01-05 02:01:18.236815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 02:01:18.236824 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:01:18.236831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:01:18.236842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:18.236856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:26.638740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:26.638905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:26.639219 | orchestrator | 2026-01-05 02:01:26.639228 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-01-05 02:01:26.639237 | orchestrator | Monday 05 January 2026 02:01:22 +0000 (0:00:14.413) 0:01:00.799 ******** 2026-01-05 02:01:26.639249 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-05 02:01:31.057548 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-05 02:01:31.057652 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-05 02:01:31.057664 | orchestrator | 2026-01-05 02:01:31.057671 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-01-05 02:01:31.057680 | orchestrator | Monday 05 January 2026 02:01:26 +0000 (0:00:03.727) 0:01:04.526 ******** 2026-01-05 02:01:31.057685 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-05 02:01:31.057690 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-05 02:01:31.057731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-05 02:01:31.057737 | orchestrator | 2026-01-05 02:01:31.057741 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-05 02:01:31.057746 | orchestrator | Monday 05 January 2026 02:01:29 +0000 (0:00:02.498) 0:01:07.024 ******** 2026-01-05 02:01:31.057752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:31.057761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:31.057778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-01-05 02:01:31.057812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:31.057818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:31.057823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-01-05 02:01:31.057828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:31.057833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:31.057837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-01-05 02:01:31.057848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:31.057857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:34.106797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-01-05 02:01:34.106902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:34.106916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:34.106924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:34.106972 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:34.106981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:34.107071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:34.107083 | orchestrator | 2026-01-05 02:01:34.107091 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-01-05 02:01:34.107100 | orchestrator | Monday 05 January 2026 02:01:32 +0000 (0:00:03.077) 0:01:10.102 ******** 2026-01-05 02:01:34.107108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:34.107117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 
02:01:34.107140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:34.107147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:34.107160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:35.188949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:35.189109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:35.189124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:35.189162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:35.189188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:35.189200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:35.189230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:35.189240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:35.189251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:35.189268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:35.189278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:35.189296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:35.189307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:35.189318 | orchestrator |
2026-01-05 02:01:35.189331 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-05 02:01:35.189350 | orchestrator | Monday 05 January 2026 02:01:35 +0000 (0:00:02.967) 0:01:13.069 ********
2026-01-05 02:01:36.221957 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:01:36.222084 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:01:36.222091 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:01:36.222096 | orchestrator |
2026-01-05 02:01:36.222102 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-01-05 02:01:36.222107 | orchestrator | Monday 05 January 2026 02:01:35 +0000 (0:00:00.362) 0:01:13.432 ********
2026-01-05 02:01:36.222114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:36.222140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 02:01:36.222146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:36.222163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:36.222169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:36.222186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:01:36.222190 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:01:36.222195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:36.222208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 02:01:36.222212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:36.222219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:36.222223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:36.222230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:01:39.764212 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:01:39.764299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 02:01:39.764334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 02:01:39.764346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 02:01:39.764369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 02:01:39.764380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 02:01:39.764391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:39.764401 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:01:39.764411 | orchestrator |
2026-01-05 02:01:39.764435 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-01-05 02:01:39.764447 | orchestrator | Monday 05 January 2026 02:01:36 +0000 (0:00:00.814) 0:01:14.246 ********
2026-01-05 02:01:39.764464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-05 02:01:39.764476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:01:39.764491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 02:01:39.764502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:39.764517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-05 02:01:41.544410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:41.544416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:41.544422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:01:41.544431 | orchestrator |
2026-01-05 02:01:41.544438 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-05 02:01:41.544445 | orchestrator | Monday 05 January 2026 02:01:40 +0000 (0:00:04.614) 0:01:18.861 ********
2026-01-05 02:01:41.544451 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:01:41.544461 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:02:56.607305 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:02:56.607390 | orchestrator |
2026-01-05 02:02:56.607397 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-01-05 02:02:56.607404 | orchestrator | Monday 05 January 2026 02:01:41 +0000 (0:00:00.569) 0:01:19.431 ********
2026-01-05 02:02:56.607409 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-01-05 02:02:56.607414 | orchestrator |
2026-01-05 02:02:56.607418 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-01-05 02:02:56.607422 | orchestrator | Monday 05 January 2026 02:01:43 +0000 (0:00:01.975) 0:01:21.407 ********
2026-01-05 02:02:56.607426 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-05 02:02:56.607431 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-01-05 02:02:56.607434 | orchestrator |
2026-01-05 02:02:56.607438 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-01-05 02:02:56.607442 | orchestrator | Monday 05 January 2026 02:01:45 +0000 (0:00:02.419) 0:01:23.827 ********
2026-01-05 02:02:56.607447 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:02:56.607450 | orchestrator |
2026-01-05 02:02:56.607454 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-05 02:02:56.607458 | orchestrator | Monday 05 January 2026 02:02:02 +0000 (0:00:16.853) 0:01:40.680 ********
2026-01-05 02:02:56.607462 | orchestrator |
2026-01-05 02:02:56.607466 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-05 02:02:56.607469 | orchestrator | Monday 05 January 2026 02:02:02 +0000 (0:00:00.074) 0:01:40.754 ********
2026-01-05 02:02:56.607473 | orchestrator |
2026-01-05 02:02:56.607477 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-05 02:02:56.607481 | orchestrator | Monday 05 January 2026 02:02:02 +0000 (0:00:00.068) 0:01:40.822 ********
2026-01-05 02:02:56.607485 | orchestrator |
2026-01-05 02:02:56.607488 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-01-05 02:02:56.607492 | orchestrator | Monday 05 January 2026 02:02:03 +0000 (0:00:00.075) 0:01:40.898 ********
2026-01-05 02:02:56.607496 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:02:56.607500 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:02:56.607504 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:02:56.607507 | orchestrator |
2026-01-05 02:02:56.607511 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-01-05 02:02:56.607515 | orchestrator | Monday 05 January 2026 02:02:10 +0000 (0:00:07.371) 0:01:48.270 ********
2026-01-05 02:02:56.607519 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:02:56.607523 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:02:56.607526 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:02:56.607530 | orchestrator |
2026-01-05 02:02:56.607534 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-01-05 02:02:56.607538 | orchestrator | Monday 05 January 2026 02:02:16 +0000 (0:00:05.722) 0:01:53.992 ********
2026-01-05 02:02:56.607542 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:02:56.607545 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:02:56.607549 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:02:56.607553 | orchestrator |
2026-01-05 02:02:56.607557 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-01-05 02:02:56.607561 | orchestrator | Monday 05 January 2026 02:02:22 +0000 (0:00:05.966) 0:01:59.959 ********
2026-01-05 02:02:56.607584 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:02:56.607590 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:02:56.607596 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:02:56.607621 | orchestrator |
2026-01-05 02:02:56.607634 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-01-05 02:02:56.607641 | orchestrator | Monday 05 January 2026 02:02:32 +0000 (0:00:10.118) 0:02:10.078 ********
2026-01-05 02:02:56.607645 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:02:56.607649 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:02:56.607652 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:02:56.607656 | orchestrator |
2026-01-05 02:02:56.607670 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-01-05 02:02:56.607674 | orchestrator | Monday 05 January 2026 02:02:37 +0000 (0:00:05.571) 0:02:15.649 ********
2026-01-05 02:02:56.607679 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:02:56.607682 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:02:56.607687 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:02:56.607693 | orchestrator |
2026-01-05 02:02:56.607700 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-01-05 02:02:56.607704 | orchestrator | Monday 05 January 2026 02:02:48 +0000 (0:00:11.176) 0:02:26.826 ********
2026-01-05 02:02:56.607708 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:02:56.607712 | orchestrator |
2026-01-05 02:02:56.607716 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:02:56.607721 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 02:02:56.607727 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 02:02:56.607731 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-05 02:02:56.607735 | orchestrator |
2026-01-05 02:02:56.607738 | orchestrator |
2026-01-05 02:02:56.607742 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:02:56.607746 | orchestrator | Monday 05 January 2026 02:02:56 +0000 (0:00:07.284) 0:02:34.111 ********
2026-01-05 02:02:56.607750 | orchestrator | ===============================================================================
2026-01-05 02:02:56.607754 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.85s
2026-01-05 02:02:56.607758 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.41s
2026-01-05 02:02:56.607773 | orchestrator | designate : Restart designate-worker container ------------------------- 11.18s
2026-01-05 02:02:56.607777 | orchestrator | designate : Restart designate-producer container ----------------------- 10.12s
2026-01-05 02:02:56.607781 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.37s
2026-01-05 02:02:56.607785 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.28s
2026-01-05 02:02:56.607789 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.70s
2026-01-05 02:02:56.607792 | orchestrator | designate : Copying over config.json files for services ----------------- 6.67s
2026-01-05 02:02:56.607796 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.46s
2026-01-05 02:02:56.607800 | orchestrator | designate : Restart designate-central container ------------------------- 5.97s
2026-01-05 02:02:56.607804 | orchestrator | designate : Restart designate-api container ----------------------------- 5.72s
2026-01-05 02:02:56.607807 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.57s
2026-01-05 02:02:56.607811 | orchestrator | designate : Check designate containers ---------------------------------- 4.61s
2026-01-05 02:02:56.607815 | orchestrator | service-ks-register : designate |
Creating users ------------------------ 4.18s 2026-01-05 02:02:56.607819 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.05s 2026-01-05 02:02:56.607827 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.80s 2026-01-05 02:02:56.607831 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.73s 2026-01-05 02:02:56.607835 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.34s 2026-01-05 02:02:56.607838 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.34s 2026-01-05 02:02:56.607842 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.08s 2026-01-05 02:02:58.970291 | orchestrator | 2026-01-05 02:02:58 | INFO  | Task 20131501-2f74-4078-9ee0-388a44d1bf21 (octavia) was prepared for execution. 2026-01-05 02:02:58.970388 | orchestrator | 2026-01-05 02:02:58 | INFO  | It takes a moment until task 20131501-2f74-4078-9ee0-388a44d1bf21 (octavia) has been started and output is visible here. 
2026-01-05 02:05:11.872408 | orchestrator |
2026-01-05 02:05:11.872576 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 02:05:11.872609 | orchestrator |
2026-01-05 02:05:11.872620 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 02:05:11.872629 | orchestrator | Monday 05 January 2026 02:03:02 +0000 (0:00:00.257) 0:00:00.257 ********
2026-01-05 02:05:11.872638 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:11.872648 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:05:11.872705 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:05:11.872714 | orchestrator |
2026-01-05 02:05:11.872722 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 02:05:11.872730 | orchestrator | Monday 05 January 2026 02:03:03 +0000 (0:00:00.316) 0:00:00.574 ********
2026-01-05 02:05:11.872739 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-01-05 02:05:11.872747 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-01-05 02:05:11.872755 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-01-05 02:05:11.872763 | orchestrator |
2026-01-05 02:05:11.872771 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-01-05 02:05:11.872779 | orchestrator |
2026-01-05 02:05:11.872787 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-05 02:05:11.872810 | orchestrator | Monday 05 January 2026 02:03:03 +0000 (0:00:00.444) 0:00:01.018 ********
2026-01-05 02:05:11.872819 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:05:11.872829 | orchestrator |
2026-01-05 02:05:11.872837 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-01-05 02:05:11.872845 | orchestrator | Monday 05 January 2026 02:03:04 +0000 (0:00:00.570) 0:00:01.588 ********
2026-01-05 02:05:11.872854 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-01-05 02:05:11.872862 | orchestrator |
2026-01-05 02:05:11.872870 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-01-05 02:05:11.872878 | orchestrator | Monday 05 January 2026 02:03:08 +0000 (0:00:03.741) 0:00:05.330 ********
2026-01-05 02:05:11.872886 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-01-05 02:05:11.872895 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-01-05 02:05:11.872903 | orchestrator |
2026-01-05 02:05:11.872911 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-01-05 02:05:11.872919 | orchestrator | Monday 05 January 2026 02:03:14 +0000 (0:00:06.985) 0:00:12.316 ********
2026-01-05 02:05:11.872927 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 02:05:11.872935 | orchestrator |
2026-01-05 02:05:11.872943 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-01-05 02:05:11.872953 | orchestrator | Monday 05 January 2026 02:03:18 +0000 (0:00:03.680) 0:00:15.997 ********
2026-01-05 02:05:11.872963 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 02:05:11.872992 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-05 02:05:11.873001 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-05 02:05:11.873011 | orchestrator |
2026-01-05 02:05:11.873020 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-01-05 02:05:11.873030 | orchestrator | Monday 05 January 2026 02:03:27 +0000 (0:00:08.478) 0:00:24.476 ********
2026-01-05 02:05:11.873040 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 02:05:11.873049 | orchestrator |
2026-01-05 02:05:11.873059 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-01-05 02:05:11.873069 | orchestrator | Monday 05 January 2026 02:03:30 +0000 (0:00:03.068) 0:00:27.545 ********
2026-01-05 02:05:11.873079 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-05 02:05:11.873088 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-05 02:05:11.873097 | orchestrator |
2026-01-05 02:05:11.873105 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-01-05 02:05:11.873113 | orchestrator | Monday 05 January 2026 02:03:38 +0000 (0:00:07.822) 0:00:35.368 ********
2026-01-05 02:05:11.873121 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-01-05 02:05:11.873128 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-01-05 02:05:11.873136 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-01-05 02:05:11.873144 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-01-05 02:05:11.873152 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-01-05 02:05:11.873160 | orchestrator |
2026-01-05 02:05:11.873168 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-05 02:05:11.873176 | orchestrator | Monday 05 January 2026 02:03:54 +0000 (0:00:16.651) 0:00:52.019 ********
2026-01-05 02:05:11.873184 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:05:11.873192 | orchestrator |
2026-01-05 02:05:11.873200 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-01-05 02:05:11.873207 | orchestrator | Monday 05 January 2026 02:03:55 +0000 (0:00:00.787) 0:00:52.807 ********
2026-01-05 02:05:11.873215 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873223 | orchestrator |
2026-01-05 02:05:11.873236 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-01-05 02:05:11.873245 | orchestrator | Monday 05 January 2026 02:04:00 +0000 (0:00:04.913) 0:00:57.720 ********
2026-01-05 02:05:11.873253 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873261 | orchestrator |
2026-01-05 02:05:11.873269 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-05 02:05:11.873294 | orchestrator | Monday 05 January 2026 02:04:04 +0000 (0:00:04.593) 0:01:02.314 ********
2026-01-05 02:05:11.873303 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:11.873310 | orchestrator |
2026-01-05 02:05:11.873318 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-01-05 02:05:11.873326 | orchestrator | Monday 05 January 2026 02:04:08 +0000 (0:00:03.471) 0:01:05.786 ********
2026-01-05 02:05:11.873334 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-05 02:05:11.873342 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-05 02:05:11.873350 | orchestrator |
2026-01-05 02:05:11.873359 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-01-05 02:05:11.873366 | orchestrator | Monday 05 January 2026 02:04:18 +0000 (0:00:10.384) 0:01:16.170 ********
2026-01-05 02:05:11.873374 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-01-05 02:05:11.873383 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-01-05 02:05:11.873399 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-01-05 02:05:11.873414 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-01-05 02:05:11.873422 | orchestrator |
2026-01-05 02:05:11.873430 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-01-05 02:05:11.873438 | orchestrator | Monday 05 January 2026 02:04:36 +0000 (0:00:17.605) 0:01:33.775 ********
2026-01-05 02:05:11.873446 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873454 | orchestrator |
2026-01-05 02:05:11.873462 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-01-05 02:05:11.873470 | orchestrator | Monday 05 January 2026 02:04:41 +0000 (0:00:04.741) 0:01:38.516 ********
2026-01-05 02:05:11.873478 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873486 | orchestrator |
2026-01-05 02:05:11.873494 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-01-05 02:05:11.873502 | orchestrator | Monday 05 January 2026 02:04:47 +0000 (0:00:05.962) 0:01:44.479 ********
2026-01-05 02:05:11.873510 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:05:11.873518 | orchestrator |
2026-01-05 02:05:11.873526 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-01-05 02:05:11.873534 | orchestrator | Monday 05 January 2026 02:04:47 +0000 (0:00:00.216) 0:01:44.696 ********
2026-01-05 02:05:11.873542 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:11.873550 | orchestrator |
2026-01-05 02:05:11.873558 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-05 02:05:11.873566 | orchestrator | Monday 05 January 2026 02:04:51 +0000 (0:00:04.608) 0:01:49.304 ********
2026-01-05 02:05:11.873574 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:05:11.873582 | orchestrator |
2026-01-05 02:05:11.873590 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-01-05 02:05:11.873598 | orchestrator | Monday 05 January 2026 02:04:53 +0000 (0:00:01.145) 0:01:50.449 ********
2026-01-05 02:05:11.873606 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873614 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:05:11.873622 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:05:11.873630 | orchestrator |
2026-01-05 02:05:11.873638 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-01-05 02:05:11.873646 | orchestrator | Monday 05 January 2026 02:04:58 +0000 (0:00:05.660) 0:01:56.110 ********
2026-01-05 02:05:11.873707 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873716 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:05:11.873724 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:05:11.873731 | orchestrator |
2026-01-05 02:05:11.873740 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-01-05 02:05:11.873748 | orchestrator | Monday 05 January 2026 02:05:03 +0000 (0:00:05.084) 0:02:01.194 ********
2026-01-05 02:05:11.873755 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873764 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:05:11.873771 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:05:11.873779 | orchestrator |
2026-01-05 02:05:11.873787 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-01-05 02:05:11.873795 | orchestrator | Monday 05 January 2026 02:05:04 +0000 (0:00:01.089) 0:02:02.284 ********
2026-01-05 02:05:11.873803 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:11.873811 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:05:11.873819 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:05:11.873827 | orchestrator |
2026-01-05 02:05:11.873835 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-01-05 02:05:11.873843 | orchestrator | Monday 05 January 2026 02:05:07 +0000 (0:00:02.059) 0:02:04.343 ********
2026-01-05 02:05:11.873851 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:05:11.873866 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:05:11.873874 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873882 | orchestrator |
2026-01-05 02:05:11.873890 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-01-05 02:05:11.873898 | orchestrator | Monday 05 January 2026 02:05:08 +0000 (0:00:01.296) 0:02:05.639 ********
2026-01-05 02:05:11.873906 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:05:11.873914 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873921 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:05:11.873929 | orchestrator |
2026-01-05 02:05:11.873937 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-01-05 02:05:11.873945 | orchestrator | Monday 05 January 2026 02:05:09 +0000 (0:00:01.245) 0:02:06.885 ********
2026-01-05 02:05:11.873953 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:05:11.873961 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:11.873969 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:05:11.873977 | orchestrator |
2026-01-05 02:05:11.873991 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-01-05 02:05:38.721291 | orchestrator | Monday 05 January 2026 02:05:11 +0000 (0:00:02.291) 0:02:09.176 ********
2026-01-05 02:05:38.721392 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:05:38.721404 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:05:38.721412 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:05:38.721419 | orchestrator |
2026-01-05 02:05:38.721427 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-01-05 02:05:38.721435 | orchestrator | Monday 05 January 2026 02:05:13 +0000 (0:00:01.840) 0:02:11.017 ********
2026-01-05 02:05:38.721442 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:38.721450 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:05:38.721457 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:05:38.721464 | orchestrator |
2026-01-05 02:05:38.721472 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-01-05 02:05:38.721479 | orchestrator | Monday 05 January 2026 02:05:14 +0000 (0:00:00.766) 0:02:11.783 ********
2026-01-05 02:05:38.721486 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:05:38.721493 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:38.721499 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:05:38.721506 | orchestrator |
2026-01-05 02:05:38.721513 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-05 02:05:38.721519 | orchestrator | Monday 05 January 2026 02:05:17 +0000 (0:00:03.082) 0:02:14.865 ********
2026-01-05 02:05:38.721541 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:05:38.721549 | orchestrator |
2026-01-05 02:05:38.721556 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-01-05 02:05:38.721563 | orchestrator | Monday 05 January 2026 02:05:18 +0000 (0:00:00.539) 0:02:15.404 ********
2026-01-05 02:05:38.721570 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:38.721576 | orchestrator |
2026-01-05 02:05:38.721583 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-05 02:05:38.721590 | orchestrator | Monday 05 January 2026 02:05:21 +0000 (0:00:03.759) 0:02:19.164 ********
2026-01-05 02:05:38.721597 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:38.721603 | orchestrator |
2026-01-05 02:05:38.721610 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-01-05 02:05:38.721656 | orchestrator | Monday 05 January 2026 02:05:25 +0000 (0:00:03.487) 0:02:22.651 ********
2026-01-05 02:05:38.721663 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-05 02:05:38.721670 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-05 02:05:38.721679 | orchestrator |
2026-01-05 02:05:38.721690 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-01-05 02:05:38.721701 | orchestrator | Monday 05 January 2026 02:05:32 +0000 (0:00:07.229) 0:02:29.880 ********
2026-01-05 02:05:38.721741 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:38.721757 | orchestrator |
2026-01-05 02:05:38.721767 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-01-05 02:05:38.721779 | orchestrator | Monday 05 January 2026 02:05:36 +0000 (0:00:03.555) 0:02:33.436 ********
2026-01-05 02:05:38.721789 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:05:38.721800 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:05:38.721810 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:05:38.721820 | orchestrator |
2026-01-05 02:05:38.721831 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-01-05 02:05:38.721842 | orchestrator | Monday 05 January 2026 02:05:36 +0000 (0:00:00.530) 0:02:33.966 ********
2026-01-05 02:05:38.721858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 02:05:38.721894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 02:05:38.721916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 02:05:38.721930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-05 02:05:38.721958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-05 02:05:38.721967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-05 02:05:38.721977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-05 02:05:38.721987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-05 02:05:38.722003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-05 02:05:40.206901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-05 02:05:40.207003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-05 02:05:40.207041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-05 02:05:40.207052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:05:40.207062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:05:40.207070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-05 02:05:40.207078 | orchestrator |
2026-01-05 02:05:40.207088 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-01-05 02:05:40.207097 | orchestrator | Monday 05 January 2026 02:05:39 +0000 (0:00:02.542) 0:02:36.509 ********
2026-01-05 02:05:40.207104 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:05:40.207113 | orchestrator |
2026-01-05 02:05:40.207120 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-01-05 02:05:40.207128 | orchestrator | Monday 05 January 2026 02:05:39 +0000 (0:00:00.138) 0:02:36.647 ********
2026-01-05 02:05:40.207135 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:05:40.207159 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:05:40.207167 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:05:40.207175 | orchestrator |
2026-01-05 02:05:40.207183 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-01-05 02:05:40.207190 | orchestrator | Monday 05 January 2026 02:05:39 +0000 (0:00:00.306) 0:02:36.953 ********
2026-01-05 02:05:40.207207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-05 02:05:40.207225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-05 02:05:40.207234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-05 02:05:40.207243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value':
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 02:05:40.207251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:05:40.207258 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:05:40.207275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 02:05:45.272293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 02:05:45.272416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 02:05:45.272433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 02:05:45.272444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:05:45.272451 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:05:45.272459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 02:05:45.272466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 02:05:45.272510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 02:05:45.272517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 02:05:45.272522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:05:45.272527 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:05:45.272533 | orchestrator | 2026-01-05 02:05:45.272539 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-05 02:05:45.272546 | orchestrator | Monday 05 January 2026 02:05:40 +0000 (0:00:00.680) 0:02:37.634 ******** 2026-01-05 02:05:45.272552 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:05:45.272559 | orchestrator | 2026-01-05 02:05:45.272568 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-05 02:05:45.272575 | orchestrator | Monday 05 January 2026 02:05:41 +0000 (0:00:00.721) 0:02:38.355 ******** 2026-01-05 02:05:45.272589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:05:45.272600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:05:45.272672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:05:46.875319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:05:46.875454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:05:46.875480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:05:46.875500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:05:46.875554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:05:46.875588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:05:46.875694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:05:46.875707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:05:46.875717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:05:46.875728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:05:46.875740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:05:46.875759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:05:46.875770 | orchestrator | 2026-01-05 02:05:46.875782 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-05 02:05:46.875799 | orchestrator | Monday 05 January 2026 02:05:46 +0000 (0:00:05.277) 0:02:43.633 ******** 2026-01-05 02:05:46.875819 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 02:05:46.986401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 02:05:46.986539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 02:05:46.986676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 02:05:46.986733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:05:46.986754 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:05:46.986794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 02:05:46.986814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 02:05:46.986862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 02:05:46.986884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 02:05:46.986902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:05:46.986932 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:05:46.986952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 02:05:46.986996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 02:05:46.987018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 02:05:46.987052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-01-05 02:05:47.803742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:05:47.803858 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:05:47.803876 | orchestrator | 2026-01-05 02:05:47.803889 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-05 02:05:47.803901 | orchestrator | Monday 05 January 2026 02:05:46 +0000 (0:00:00.677) 0:02:44.310 ******** 2026-01-05 02:05:47.803944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-01-05 02:05:47.803959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 02:05:47.803989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 02:05:47.804003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 02:05:47.804035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:05:47.804048 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:05:47.804060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 02:05:47.804080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 02:05:47.804092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 02:05:47.804109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 02:05:47.804121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:05:47.804135 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:05:47.804157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 02:05:52.608164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 02:05:52.608297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 02:05:52.608314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 02:05:52.608324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 02:05:52.608334 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:05:52.608343 | orchestrator | 2026-01-05 02:05:52.608367 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-05 
02:05:52.608378 | orchestrator | Monday 05 January 2026 02:05:48 +0000 (0:00:01.337) 0:02:45.648 ******** 2026-01-05 02:05:52.608387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:05:52.608413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:05:52.608429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:05:52.608438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:05:52.608446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:05:52.608459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:05:52.608464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:05:52.608473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:08.888534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:08.888702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:08.888715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:08.888737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:08.888744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:06:08.888752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-01-05 02:06:08.888796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:06:08.888803 | orchestrator | 2026-01-05 02:06:08.888811 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-05 02:06:08.888818 | orchestrator | Monday 05 January 2026 02:05:53 +0000 (0:00:05.314) 0:02:50.962 ******** 2026-01-05 02:06:08.888825 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-05 02:06:08.888833 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-05 02:06:08.888838 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-05 02:06:08.888844 | orchestrator | 2026-01-05 02:06:08.888851 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-05 02:06:08.888857 | orchestrator | Monday 05 January 2026 02:05:55 +0000 (0:00:01.752) 0:02:52.714 ******** 2026-01-05 02:06:08.888865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:06:08.888876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:06:08.888882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:06:08.888899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:06:24.904332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:06:24.904423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:06:24.904433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:24.904455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:24.904461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:24.904486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:24.904505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:24.904512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:24.904521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:06:24.904532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:06:24.904544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:06:24.904636 | orchestrator | 2026-01-05 02:06:24.904652 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-05 02:06:24.904673 | orchestrator | Monday 05 January 2026 02:06:12 +0000 (0:00:16.723) 0:03:09.437 ******** 2026-01-05 02:06:24.904683 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:06:24.904693 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:06:24.904701 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:06:24.904706 | orchestrator | 2026-01-05 02:06:24.904712 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-05 02:06:24.904718 | orchestrator | Monday 05 January 2026 02:06:14 +0000 (0:00:02.050) 0:03:11.488 ******** 2026-01-05 02:06:24.904723 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-05 02:06:24.904729 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-05 02:06:24.904734 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-05 02:06:24.904739 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-05 02:06:24.904745 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-05 02:06:24.904750 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-05 02:06:24.904756 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-05 02:06:24.904761 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-05 02:06:24.904766 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-05 02:06:24.904772 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-05 02:06:24.904777 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-05 02:06:24.904783 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-05 02:06:24.904788 | orchestrator | 2026-01-05 02:06:24.904793 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-05 02:06:24.904799 | orchestrator | Monday 05 January 2026 02:06:19 +0000 (0:00:05.317) 0:03:16.805 ******** 2026-01-05 02:06:24.904804 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-05 02:06:24.904810 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-05 02:06:24.904823 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-05 02:06:33.831932 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-05 02:06:33.832040 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-05 02:06:33.832048 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-05 02:06:33.832054 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-05 02:06:33.832061 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-05 02:06:33.832068 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-05 02:06:33.832075 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-05 02:06:33.832081 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-05 02:06:33.832088 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-05 02:06:33.832094 | orchestrator | 2026-01-05 02:06:33.832101 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-05 02:06:33.832110 | orchestrator | Monday 05 January 2026 02:06:24 +0000 (0:00:05.416) 0:03:22.221 ******** 2026-01-05 02:06:33.832117 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-01-05 02:06:33.832126 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-05 02:06:33.832132 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-05 02:06:33.832139 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-05 02:06:33.832146 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-05 02:06:33.832152 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-05 02:06:33.832158 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-05 02:06:33.832165 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-05 02:06:33.832195 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-05 02:06:33.832201 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-05 02:06:33.832208 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-05 02:06:33.832214 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-05 02:06:33.832237 | orchestrator | 2026-01-05 02:06:33.832244 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-05 02:06:33.832251 | orchestrator | Monday 05 January 2026 02:06:30 +0000 (0:00:05.485) 0:03:27.707 ******** 2026-01-05 02:06:33.832270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:06:33.832277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:06:33.832300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 02:06:33.832306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:06:33.832323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-05 02:06:33.832332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-01-05 02:06:33.832343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:33.832352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:33.832358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-05 02:06:33.832370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:07:45.341740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:07:45.341872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-05 02:07:45.341898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:07:45.341908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:07:45.341915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-05 02:07:45.341923 | orchestrator | 2026-01-05 
02:07:45.341931 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-05 02:07:45.341939 | orchestrator | Monday 05 January 2026 02:06:34 +0000 (0:00:04.190) 0:03:31.898 ******** 2026-01-05 02:07:45.341946 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:07:45.341952 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:07:45.341959 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:07:45.341965 | orchestrator | 2026-01-05 02:07:45.341972 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-05 02:07:45.341978 | orchestrator | Monday 05 January 2026 02:06:35 +0000 (0:00:00.527) 0:03:32.425 ******** 2026-01-05 02:07:45.341984 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.341991 | orchestrator | 2026-01-05 02:07:45.341997 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-05 02:07:45.342003 | orchestrator | Monday 05 January 2026 02:06:37 +0000 (0:00:02.368) 0:03:34.794 ******** 2026-01-05 02:07:45.342009 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.342146 | orchestrator | 2026-01-05 02:07:45.342156 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-05 02:07:45.342172 | orchestrator | Monday 05 January 2026 02:06:39 +0000 (0:00:02.270) 0:03:37.065 ******** 2026-01-05 02:07:45.342178 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.342185 | orchestrator | 2026-01-05 02:07:45.342192 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-05 02:07:45.342198 | orchestrator | Monday 05 January 2026 02:06:42 +0000 (0:00:02.383) 0:03:39.449 ******** 2026-01-05 02:07:45.342221 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.342229 | orchestrator | 2026-01-05 02:07:45.342236 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-01-05 02:07:45.342243 | orchestrator | Monday 05 January 2026 02:06:44 +0000 (0:00:02.383) 0:03:41.833 ******** 2026-01-05 02:07:45.342250 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.342256 | orchestrator | 2026-01-05 02:07:45.342263 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-05 02:07:45.342269 | orchestrator | Monday 05 January 2026 02:07:07 +0000 (0:00:22.820) 0:04:04.653 ******** 2026-01-05 02:07:45.342276 | orchestrator | 2026-01-05 02:07:45.342284 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-05 02:07:45.342292 | orchestrator | Monday 05 January 2026 02:07:07 +0000 (0:00:00.069) 0:04:04.722 ******** 2026-01-05 02:07:45.342298 | orchestrator | 2026-01-05 02:07:45.342306 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-05 02:07:45.342313 | orchestrator | Monday 05 January 2026 02:07:07 +0000 (0:00:00.077) 0:04:04.800 ******** 2026-01-05 02:07:45.342321 | orchestrator | 2026-01-05 02:07:45.342327 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-01-05 02:07:45.342335 | orchestrator | Monday 05 January 2026 02:07:07 +0000 (0:00:00.063) 0:04:04.863 ******** 2026-01-05 02:07:45.342342 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.342349 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:07:45.342358 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:07:45.342365 | orchestrator | 2026-01-05 02:07:45.342372 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-05 02:07:45.342380 | orchestrator | Monday 05 January 2026 02:07:18 +0000 (0:00:11.045) 0:04:15.909 ******** 2026-01-05 02:07:45.342387 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.342394 | orchestrator | changed: 
[testbed-node-2] 2026-01-05 02:07:45.342402 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:07:45.342409 | orchestrator | 2026-01-05 02:07:45.342417 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-05 02:07:45.342425 | orchestrator | Monday 05 January 2026 02:07:25 +0000 (0:00:06.671) 0:04:22.581 ******** 2026-01-05 02:07:45.342433 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:07:45.342441 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:07:45.342448 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.342455 | orchestrator | 2026-01-05 02:07:45.342486 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-05 02:07:45.342494 | orchestrator | Monday 05 January 2026 02:07:33 +0000 (0:00:08.451) 0:04:31.032 ******** 2026-01-05 02:07:45.342500 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.342513 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:07:45.342521 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:07:45.342528 | orchestrator | 2026-01-05 02:07:45.342535 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-01-05 02:07:45.342544 | orchestrator | Monday 05 January 2026 02:07:39 +0000 (0:00:05.603) 0:04:36.636 ******** 2026-01-05 02:07:45.342551 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:07:45.342559 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:07:45.342566 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:07:45.342574 | orchestrator | 2026-01-05 02:07:45.342580 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:07:45.342589 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 02:07:45.342606 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-01-05 02:07:45.342614 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 02:07:45.342620 | orchestrator | 2026-01-05 02:07:45.342627 | orchestrator | 2026-01-05 02:07:45.342634 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:07:45.342641 | orchestrator | Monday 05 January 2026 02:07:45 +0000 (0:00:06.009) 0:04:42.645 ******** 2026-01-05 02:07:45.342648 | orchestrator | =============================================================================== 2026-01-05 02:07:45.342655 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.82s 2026-01-05 02:07:45.342662 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.61s 2026-01-05 02:07:45.342669 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.72s 2026-01-05 02:07:45.342676 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.65s 2026-01-05 02:07:45.342682 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.05s 2026-01-05 02:07:45.342690 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.38s 2026-01-05 02:07:45.342696 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.48s 2026-01-05 02:07:45.342704 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.45s 2026-01-05 02:07:45.342711 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.82s 2026-01-05 02:07:45.342717 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.23s 2026-01-05 02:07:45.342724 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.99s 2026-01-05 02:07:45.342731 
| orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.67s 2026-01-05 02:07:45.342738 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.01s 2026-01-05 02:07:45.342744 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.96s 2026-01-05 02:07:45.342759 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.66s 2026-01-05 02:07:45.653177 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.60s 2026-01-05 02:07:45.653260 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.49s 2026-01-05 02:07:45.653267 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.42s 2026-01-05 02:07:45.653271 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.32s 2026-01-05 02:07:45.653275 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.31s 2026-01-05 02:07:47.640209 | orchestrator | 2026-01-05 02:07:47 | INFO  | Task 91c54a4f-1aed-42ea-8fd6-cc08ba374c46 (ceilometer) was prepared for execution. 2026-01-05 02:07:47.640286 | orchestrator | 2026-01-05 02:07:47 | INFO  | It takes a moment until task 91c54a4f-1aed-42ea-8fd6-cc08ba374c46 (ceilometer) has been started and output is visible here. 
2026-01-05 02:08:11.712387 | orchestrator | 2026-01-05 02:08:11.712535 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 02:08:11.712564 | orchestrator | 2026-01-05 02:08:11.712572 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 02:08:11.712580 | orchestrator | Monday 05 January 2026 02:07:51 +0000 (0:00:00.235) 0:00:00.235 ******** 2026-01-05 02:08:11.712586 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:08:11.712595 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:08:11.712601 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:08:11.712608 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:08:11.712615 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:08:11.712643 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:08:11.712649 | orchestrator | 2026-01-05 02:08:11.712656 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 02:08:11.712662 | orchestrator | Monday 05 January 2026 02:07:52 +0000 (0:00:00.691) 0:00:00.926 ******** 2026-01-05 02:08:11.712670 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-01-05 02:08:11.712677 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-01-05 02:08:11.712683 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-01-05 02:08:11.712689 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-01-05 02:08:11.712695 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-01-05 02:08:11.712701 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-01-05 02:08:11.712708 | orchestrator | 2026-01-05 02:08:11.712714 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-01-05 02:08:11.712721 | orchestrator | 2026-01-05 02:08:11.712728 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-01-05 02:08:11.712734 | orchestrator | Monday 05 January 2026 02:07:52 +0000 (0:00:00.662) 0:00:01.589 ******** 2026-01-05 02:08:11.712743 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 02:08:11.712751 | orchestrator | 2026-01-05 02:08:11.712757 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-01-05 02:08:11.712764 | orchestrator | Monday 05 January 2026 02:07:54 +0000 (0:00:01.289) 0:00:02.879 ******** 2026-01-05 02:08:11.712770 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:11.712777 | orchestrator | 2026-01-05 02:08:11.712783 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-01-05 02:08:11.712789 | orchestrator | Monday 05 January 2026 02:07:54 +0000 (0:00:00.128) 0:00:03.007 ******** 2026-01-05 02:08:11.712796 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:11.712802 | orchestrator | 2026-01-05 02:08:11.712808 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-01-05 02:08:11.712815 | orchestrator | Monday 05 January 2026 02:07:54 +0000 (0:00:00.296) 0:00:03.303 ******** 2026-01-05 02:08:11.712822 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-05 02:08:11.712828 | orchestrator | 2026-01-05 02:08:11.712835 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-01-05 02:08:11.712842 | orchestrator | Monday 05 January 2026 02:07:58 +0000 (0:00:03.521) 0:00:06.825 ******** 2026-01-05 02:08:11.712848 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 02:08:11.712854 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-01-05 02:08:11.712861 | orchestrator | 
2026-01-05 02:08:11.712905 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] ***********************
2026-01-05 02:08:11.712912 | orchestrator | Monday 05 January 2026 02:08:02 +0000 (0:00:04.223) 0:00:11.049 ********
2026-01-05 02:08:11.712919 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 02:08:11.712926 | orchestrator |
2026-01-05 02:08:11.712933 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ******************
2026-01-05 02:08:11.712942 | orchestrator | Monday 05 January 2026 02:08:05 +0000 (0:00:03.384) 0:00:14.433 ********
2026-01-05 02:08:11.712951 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin)
2026-01-05 02:08:11.712959 | orchestrator |
2026-01-05 02:08:11.712968 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] *******
2026-01-05 02:08:11.712977 | orchestrator | Monday 05 January 2026 02:08:09 +0000 (0:00:04.349) 0:00:18.782 ********
2026-01-05 02:08:11.712985 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:08:11.712994 | orchestrator |
2026-01-05 02:08:11.713002 | orchestrator | TASK [ceilometer : Ensuring config directories exist] **************************
2026-01-05 02:08:11.713011 | orchestrator | Monday 05 January 2026 02:08:10 +0000 (0:00:00.311) 0:00:19.094 ********
2026-01-05 02:08:11.713029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:11.713060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:11.713069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:11.713082 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:11.713094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:11.713103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:11.713117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:11.713133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:16.573527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:16.573669 | orchestrator |
2026-01-05 02:08:16.573696 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] *****
2026-01-05 02:08:16.573743 | orchestrator | Monday 05 January 2026 02:08:11 +0000 (0:00:01.418) 0:00:20.512 ********
2026-01-05 02:08:16.573764 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 02:08:16.573784 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 02:08:16.573800 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 02:08:16.573818 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 02:08:16.573837 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 02:08:16.573854 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 02:08:16.573871 | orchestrator |
2026-01-05 02:08:16.573889 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] ***
2026-01-05 02:08:16.573909 | orchestrator | Monday 05 January 2026 02:08:13 +0000 (0:00:01.695) 0:00:22.208 ********
2026-01-05 02:08:16.573930 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:08:16.573951 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:08:16.573968 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:08:16.573986 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:08:16.574003 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:08:16.574106 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:08:16.574131 | orchestrator |
2026-01-05 02:08:16.574149 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] ***
2026-01-05 02:08:16.574168 | orchestrator | Monday 05 January 2026 02:08:14 +0000 (0:00:00.628) 0:00:22.836 ********
2026-01-05 02:08:16.574186 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:08:16.574200 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:08:16.574225 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:08:16.574237 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:08:16.574248 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:08:16.574284 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:08:16.574296 | orchestrator |
2026-01-05 02:08:16.574307 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter definitions] ***
2026-01-05 02:08:16.574319 | orchestrator | Monday 05 January 2026 02:08:14 +0000 (0:00:00.822) 0:00:23.658 ********
2026-01-05 02:08:16.574330 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:08:16.574341 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:08:16.574352 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:08:16.574363 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:08:16.574373 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:08:16.574384 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:08:16.574395 | orchestrator |
2026-01-05 02:08:16.574406 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] *********
2026-01-05 02:08:16.574417 | orchestrator | Monday 05 January 2026 02:08:15 +0000 (0:00:00.620) 0:00:24.279 ********
2026-01-05 02:08:16.574459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:16.574475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:16.574488 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:08:16.574526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:16.574549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:16.574561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:16.574581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:16.574593 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:08:16.574605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:16.574620 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:08:16.574638 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:08:16.574658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:16.574677 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:08:16.574707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:21.343813 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:08:21.343905 | orchestrator |
2026-01-05 02:08:21.343914 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] *************
2026-01-05 02:08:21.343922 | orchestrator | Monday 05 January 2026 02:08:16 +0000 (0:00:01.091) 0:00:25.371 ********
2026-01-05 02:08:21.343942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:21.343969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:21.343976 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:08:21.343982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:21.343988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:21.343994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:21.343999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:21.344005 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:08:21.344027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:21.344039 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:08:21.344044 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:08:21.344050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:21.344055 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:08:21.344061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:21.344066 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:08:21.344071 | orchestrator |
2026-01-05 02:08:21.344078 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] ***
2026-01-05 02:08:21.344085 | orchestrator | Monday 05 January 2026 02:08:17 +0000 (0:00:00.863) 0:00:26.235 ********
2026-01-05 02:08:21.344090 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 02:08:21.344095 | orchestrator |
2026-01-05 02:08:21.344101 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] ***
2026-01-05 02:08:21.344106 | orchestrator | Monday 05 January 2026 02:08:18 +0000 (0:00:00.698) 0:00:26.933 ********
2026-01-05 02:08:21.344112 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:08:21.344117 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:08:21.344122 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:08:21.344127 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:08:21.344132 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:08:21.344137 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:08:21.344142 | orchestrator |
2026-01-05 02:08:21.344148 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] *****
2026-01-05 02:08:21.344153 | orchestrator | Monday 05 January 2026 02:08:18 +0000 (0:00:00.832) 0:00:27.766 ********
2026-01-05 02:08:21.344158 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:08:21.344163 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:08:21.344168 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:08:21.344218 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:08:21.344225 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:08:21.344230 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:08:21.344235 | orchestrator |
2026-01-05 02:08:21.344240 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] ****
2026-01-05 02:08:21.344245 | orchestrator | Monday 05 January 2026 02:08:19 +0000 (0:00:00.967) 0:00:28.734 ********
2026-01-05 02:08:21.344250 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:08:21.344261 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:08:21.344266 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:08:21.344271 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:08:21.344276 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:08:21.344281 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:08:21.344286 | orchestrator |
2026-01-05 02:08:21.344291 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] **********************
2026-01-05 02:08:21.344296 | orchestrator | Monday 05 January 2026 02:08:20 +0000 (0:00:00.789) 0:00:29.524 ********
2026-01-05 02:08:21.344301 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:08:21.344306 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:08:21.344311 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:08:21.344317 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:08:21.344322 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:08:21.344329 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:08:21.344338 | orchestrator |
2026-01-05 02:08:26.680615 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************
2026-01-05 02:08:26.680696 | orchestrator | Monday 05 January 2026 02:08:21 +0000 (0:00:00.623) 0:00:30.148 ********
2026-01-05 02:08:26.680703 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 02:08:26.680708 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 02:08:26.680712 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 02:08:26.680717 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 02:08:26.680721 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 02:08:26.680737 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 02:08:26.680741 | orchestrator |
2026-01-05 02:08:26.680745 | orchestrator | TASK [ceilometer : Copying over polling.yaml] **********************************
2026-01-05 02:08:26.680749 | orchestrator | Monday 05 January 2026 02:08:22 +0000 (0:00:01.525) 0:00:31.673 ********
2026-01-05 02:08:26.680755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:26.680763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:26.680769 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:08:26.680773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:26.680787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:26.680815 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:08:26.680820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:26.680836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:26.680841 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:08:26.680848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:26.680860 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:08:26.680864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:26.680868 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:08:26.680872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-01-05 02:08:26.680881 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:08:26.680885 | orchestrator |
2026-01-05 02:08:26.680890 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-01-05 02:08:26.680894 | orchestrator | Monday 05 January 2026 02:08:24 +0000 (0:00:01.243) 0:00:32.917 ********
2026-01-05 02:08:26.680897 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:08:26.680901 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:08:26.680905 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:08:26.680915 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:08:26.680921 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:08:26.680927 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:08:26.680933 | orchestrator |
2026-01-05 02:08:26.680939 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-01-05 02:08:26.680945 | orchestrator | Monday 05 January 2026 02:08:24 +0000 (0:00:00.606) 0:00:33.524 ********
2026-01-05 02:08:26.680972 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 02:08:26.680979 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 02:08:26.680985 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 02:08:26.680990 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 02:08:26.680996 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 02:08:26.681001 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 02:08:26.681007 | orchestrator |
2026-01-05 02:08:26.681013 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-01-05 02:08:26.681018 | orchestrator | Monday 05 January 2026 02:08:26 +0000 (0:00:01.623) 0:00:35.147 ********
2026-01-05 02:08:26.681030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:32.723200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:32.723314 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:08:32.723328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:32.723339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:32.723382 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:08:32.723400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-01-05 02:08:32.723473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-05 02:08:32.723485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:32.723494 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:08:32.723503 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:32.723534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:32.723542 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:08:32.723550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:32.723569 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:32.723582 | orchestrator | 2026-01-05 02:08:32.723595 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-01-05 02:08:32.723607 | orchestrator | Monday 05 January 2026 02:08:27 +0000 (0:00:00.900) 0:00:36.047 ******** 2026-01-05 02:08:32.723619 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:32.723630 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:08:32.723639 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:08:32.723649 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:32.723659 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:08:32.723670 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:32.723680 | orchestrator | 2026-01-05 02:08:32.723691 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-01-05 02:08:32.723702 | orchestrator | Monday 05 January 2026 02:08:28 +0000 (0:00:00.780) 0:00:36.828 ******** 2026-01-05 02:08:32.723712 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:32.723723 | orchestrator | 2026-01-05 02:08:32.723734 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-01-05 02:08:32.723745 | orchestrator | Monday 05 January 2026 02:08:28 +0000 (0:00:00.145) 0:00:36.973 ******** 2026-01-05 02:08:32.723756 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:32.723768 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:08:32.723779 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:08:32.723792 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:32.723803 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:08:32.723815 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:32.723826 | orchestrator | 2026-01-05 
02:08:32.723837 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-01-05 02:08:32.723849 | orchestrator | Monday 05 January 2026 02:08:28 +0000 (0:00:00.607) 0:00:37.581 ******** 2026-01-05 02:08:32.723862 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 02:08:32.723875 | orchestrator | 2026-01-05 02:08:32.723887 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-01-05 02:08:32.723898 | orchestrator | Monday 05 January 2026 02:08:30 +0000 (0:00:01.351) 0:00:38.933 ******** 2026-01-05 02:08:32.723912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:32.723938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:33.230384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:33.230579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:33.230594 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:33.230601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:33.230609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:33.230617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:33.230664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:33.230672 | orchestrator | 2026-01-05 02:08:33.230680 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-01-05 02:08:33.230687 | orchestrator | Monday 05 January 2026 02:08:32 +0000 (0:00:02.592) 0:00:41.525 ******** 2026-01-05 02:08:33.230694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:33.230701 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:33.230709 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:33.230716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:33.230722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  
2026-01-05 02:08:33.230728 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:08:33.230735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:33.230758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:35.247685 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:08:35.247766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:35.247776 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:35.247782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:35.247787 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:08:35.247792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:35.247797 | orchestrator | skipping: [testbed-node-5] 2026-01-05 
02:08:35.247802 | orchestrator | 2026-01-05 02:08:35.247807 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-01-05 02:08:35.247813 | orchestrator | Monday 05 January 2026 02:08:33 +0000 (0:00:01.032) 0:00:42.558 ******** 2026-01-05 02:08:35.247819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:35.247862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:35.247868 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:35.247886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:35.247892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:35.247896 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:08:35.247901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:35.247906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:35.247911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:35.247921 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:08:35.247925 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:35.247934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:35.247939 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:08:35.247949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:42.877616 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:42.877733 | orchestrator | 2026-01-05 02:08:42.877751 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-01-05 02:08:42.877766 | orchestrator | Monday 05 January 2026 02:08:35 +0000 (0:00:01.489) 0:00:44.047 ******** 2026-01-05 02:08:42.877783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:42.877800 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:42.877815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:42.877860 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:42.877893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:42.877930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:42.877946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:42.877961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:42.877973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:42.877994 | orchestrator | 2026-01-05 02:08:42.878007 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-01-05 02:08:42.878085 | orchestrator | Monday 05 January 2026 02:08:37 +0000 (0:00:02.426) 0:00:46.473 ******** 2026-01-05 02:08:42.878100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 
'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:42.878122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:42.878145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:52.285947 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:52.286162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:52.286214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:52.286230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:52.286273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:52.286298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:52.286315 | orchestrator | 2026-01-05 02:08:52.286333 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-01-05 02:08:52.286350 | orchestrator | Monday 05 January 2026 02:08:42 +0000 (0:00:05.204) 0:00:51.678 ******** 2026-01-05 02:08:52.286444 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 02:08:52.286468 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 02:08:52.286485 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 02:08:52.286502 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-05 02:08:52.286520 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-05 02:08:52.286536 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-05 02:08:52.286554 | orchestrator | 2026-01-05 02:08:52.286572 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-01-05 02:08:52.286589 | orchestrator | Monday 05 January 2026 02:08:44 +0000 (0:00:01.447) 0:00:53.126 ******** 2026-01-05 02:08:52.286606 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:52.286624 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:08:52.286642 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:08:52.286659 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:52.286675 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:08:52.286700 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:52.286712 | orchestrator | 2026-01-05 02:08:52.286724 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-01-05 02:08:52.286737 | orchestrator | Monday 05 January 2026 02:08:45 +0000 (0:00:00.806) 0:00:53.933 ******** 2026-01-05 02:08:52.286748 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:52.286760 | orchestrator | skipping: [testbed-node-4] 
2026-01-05 02:08:52.286772 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:52.286783 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:08:52.286795 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:08:52.286807 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:08:52.286818 | orchestrator | 2026-01-05 02:08:52.286830 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-01-05 02:08:52.286841 | orchestrator | Monday 05 January 2026 02:08:46 +0000 (0:00:01.485) 0:00:55.418 ******** 2026-01-05 02:08:52.286853 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:52.286865 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:08:52.286875 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:52.286885 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:08:52.286895 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:08:52.286905 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:08:52.286914 | orchestrator | 2026-01-05 02:08:52.286924 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-01-05 02:08:52.286933 | orchestrator | Monday 05 January 2026 02:08:48 +0000 (0:00:01.731) 0:00:57.150 ******** 2026-01-05 02:08:52.286943 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 02:08:52.286952 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 02:08:52.286962 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 02:08:52.286971 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-05 02:08:52.286981 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-05 02:08:52.286991 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-05 02:08:52.287000 | orchestrator | 2026-01-05 02:08:52.287010 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-01-05 02:08:52.287019 | orchestrator | Monday 05 January 2026 02:08:49 +0000 
(0:00:01.362) 0:00:58.512 ******** 2026-01-05 02:08:52.287030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:52.287049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:52.287060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:52.287088 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:53.104692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:53.104786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:08:53.104799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:53.104824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:53.104832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:08:53.104859 | orchestrator | 2026-01-05 02:08:53.104868 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-01-05 02:08:53.104877 | orchestrator | Monday 05 January 2026 02:08:52 +0000 (0:00:02.566) 0:01:01.079 ******** 2026-01-05 02:08:53.104886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:53.104908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:53.104917 | 
orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:53.104926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:53.104934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:53.104942 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:08:53.104954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:53.104962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:53.104975 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:08:53.104983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:53.104996 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:53.105016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:56.874223 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:08:56.874322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:56.874336 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:56.874343 | orchestrator | 2026-01-05 02:08:56.874351 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-01-05 02:08:56.874359 | orchestrator | Monday 05 January 2026 02:08:53 +0000 (0:00:00.829) 0:01:01.909 ******** 2026-01-05 02:08:56.874366 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:56.874371 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:08:56.874377 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:08:56.874383 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:56.874433 | orchestrator | skipping: [testbed-node-4] 2026-01-05 
02:08:56.874440 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:56.874446 | orchestrator | 2026-01-05 02:08:56.874452 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-01-05 02:08:56.874458 | orchestrator | Monday 05 January 2026 02:08:53 +0000 (0:00:00.845) 0:01:02.755 ******** 2026-01-05 02:08:56.874482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:56.874511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:56.874519 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:08:56.874525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:56.874532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:56.874539 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:08:56.874564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-05 02:08:56.874571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 
'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-05 02:08:56.874578 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:08:56.874596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:56.874604 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:08:56.874610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:56.874617 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:08:56.874623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-05 02:08:56.874629 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:08:56.874635 | orchestrator | 2026-01-05 02:08:56.874641 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-01-05 02:08:56.874647 | orchestrator | Monday 05 January 2026 02:08:54 +0000 (0:00:00.843) 0:01:03.598 ******** 2026-01-05 02:08:56.874660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:18.330176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:18.330307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:18.330336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:18.330344 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:18.330348 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:18.330353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:09:18.330431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:09:18.330444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-05 02:09:18.330448 | orchestrator | 2026-01-05 02:09:18.330454 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-01-05 02:09:18.330459 | orchestrator | Monday 05 January 2026 02:08:56 +0000 (0:00:02.080) 0:01:05.679 ******** 2026-01-05 
02:09:18.330464 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:09:18.330469 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:09:18.330473 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:09:18.330477 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:09:18.330480 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:09:18.330484 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:09:18.330488 | orchestrator | 2026-01-05 02:09:18.330492 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-01-05 02:09:18.330500 | orchestrator | Monday 05 January 2026 02:08:57 +0000 (0:00:00.823) 0:01:06.503 ******** 2026-01-05 02:09:18.330504 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:09:18.330507 | orchestrator | 2026-01-05 02:09:18.330511 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-05 02:09:18.330515 | orchestrator | Monday 05 January 2026 02:09:01 +0000 (0:00:04.210) 0:01:10.713 ******** 2026-01-05 02:09:18.330519 | orchestrator | 2026-01-05 02:09:18.330523 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-05 02:09:18.330526 | orchestrator | Monday 05 January 2026 02:09:01 +0000 (0:00:00.073) 0:01:10.786 ******** 2026-01-05 02:09:18.330530 | orchestrator | 2026-01-05 02:09:18.330534 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-05 02:09:18.330538 | orchestrator | Monday 05 January 2026 02:09:02 +0000 (0:00:00.077) 0:01:10.864 ******** 2026-01-05 02:09:18.330541 | orchestrator | 2026-01-05 02:09:18.330545 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-05 02:09:18.330549 | orchestrator | Monday 05 January 2026 02:09:02 +0000 (0:00:00.083) 0:01:10.948 ******** 2026-01-05 02:09:18.330553 | orchestrator | 2026-01-05 02:09:18.330556 | 
orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-05 02:09:18.330560 | orchestrator | Monday 05 January 2026 02:09:02 +0000 (0:00:00.074) 0:01:11.022 ******** 2026-01-05 02:09:18.330564 | orchestrator | 2026-01-05 02:09:18.330568 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-05 02:09:18.330571 | orchestrator | Monday 05 January 2026 02:09:02 +0000 (0:00:00.069) 0:01:11.092 ******** 2026-01-05 02:09:18.330575 | orchestrator | 2026-01-05 02:09:18.330579 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-01-05 02:09:18.330583 | orchestrator | Monday 05 January 2026 02:09:02 +0000 (0:00:00.074) 0:01:11.166 ******** 2026-01-05 02:09:18.330586 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:09:18.330591 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:09:18.330595 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:09:18.330598 | orchestrator | 2026-01-05 02:09:18.330603 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-01-05 02:09:18.330609 | orchestrator | Monday 05 January 2026 02:09:07 +0000 (0:00:05.556) 0:01:16.723 ******** 2026-01-05 02:09:18.330614 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:09:18.330643 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:09:18.330648 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:09:18.330660 | orchestrator | 2026-01-05 02:09:18.330666 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-01-05 02:09:18.330673 | orchestrator | Monday 05 January 2026 02:09:12 +0000 (0:00:04.609) 0:01:21.332 ******** 2026-01-05 02:09:18.330679 | orchestrator | changed: [testbed-node-3] 2026-01-05 02:09:18.330685 | orchestrator | changed: [testbed-node-4] 2026-01-05 02:09:18.330694 | orchestrator | changed: [testbed-node-5] 2026-01-05 
02:09:18.330701 | orchestrator | 2026-01-05 02:09:18.330708 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:09:18.330716 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-05 02:09:18.330724 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-05 02:09:18.330738 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-05 02:09:18.817349 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-05 02:09:18.817498 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-05 02:09:18.817508 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-05 02:09:18.817515 | orchestrator | 2026-01-05 02:09:18.817524 | orchestrator | 2026-01-05 02:09:18.817531 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:09:18.817540 | orchestrator | Monday 05 January 2026 02:09:18 +0000 (0:00:05.793) 0:01:27.126 ******** 2026-01-05 02:09:18.817546 | orchestrator | =============================================================================== 2026-01-05 02:09:18.817552 | orchestrator | ceilometer : Restart ceilometer-compute container ----------------------- 5.79s 2026-01-05 02:09:18.817559 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 5.56s 2026-01-05 02:09:18.817565 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.20s 2026-01-05 02:09:18.817571 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 4.61s 2026-01-05 02:09:18.817578 | orchestrator | service-ks-register : ceilometer | 
Granting user roles ------------------ 4.35s 2026-01-05 02:09:18.817584 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 4.22s 2026-01-05 02:09:18.817590 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.21s 2026-01-05 02:09:18.817597 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.52s 2026-01-05 02:09:18.817603 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.38s 2026-01-05 02:09:18.817609 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.59s 2026-01-05 02:09:18.817615 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.57s 2026-01-05 02:09:18.817642 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.43s 2026-01-05 02:09:18.817652 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 2.08s 2026-01-05 02:09:18.817658 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.73s 2026-01-05 02:09:18.817664 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.70s 2026-01-05 02:09:18.817670 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.62s 2026-01-05 02:09:18.817677 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.53s 2026-01-05 02:09:18.817683 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.49s 2026-01-05 02:09:18.817709 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.49s 2026-01-05 02:09:18.817717 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.45s 2026-01-05 02:09:21.264714 | orchestrator | 2026-01-05 02:09:21 | INFO  | Task 
c713ea2f-42d3-44b8-8dce-aeb3f680455f (aodh) was prepared for execution. 2026-01-05 02:09:21.264792 | orchestrator | 2026-01-05 02:09:21 | INFO  | It takes a moment until task c713ea2f-42d3-44b8-8dce-aeb3f680455f (aodh) has been started and output is visible here. 2026-01-05 02:09:54.578694 | orchestrator | 2026-01-05 02:09:54.578822 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 02:09:54.578838 | orchestrator | 2026-01-05 02:09:54.578846 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 02:09:54.578854 | orchestrator | Monday 05 January 2026 02:09:25 +0000 (0:00:00.347) 0:00:00.347 ******** 2026-01-05 02:09:54.578861 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:09:54.578869 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:09:54.578876 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:09:54.578882 | orchestrator | 2026-01-05 02:09:54.578889 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 02:09:54.578896 | orchestrator | Monday 05 January 2026 02:09:25 +0000 (0:00:00.328) 0:00:00.675 ******** 2026-01-05 02:09:54.578903 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-01-05 02:09:54.578908 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-01-05 02:09:54.578913 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-01-05 02:09:54.578917 | orchestrator | 2026-01-05 02:09:54.578921 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-01-05 02:09:54.578925 | orchestrator | 2026-01-05 02:09:54.578929 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-01-05 02:09:54.578934 | orchestrator | Monday 05 January 2026 02:09:26 +0000 (0:00:00.469) 0:00:01.145 ******** 2026-01-05 02:09:54.578938 | orchestrator | included: 
/ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:09:54.578944 | orchestrator | 2026-01-05 02:09:54.578948 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-01-05 02:09:54.578951 | orchestrator | Monday 05 January 2026 02:09:26 +0000 (0:00:00.570) 0:00:01.716 ******** 2026-01-05 02:09:54.578955 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-01-05 02:09:54.578959 | orchestrator | 2026-01-05 02:09:54.578963 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-01-05 02:09:54.578967 | orchestrator | Monday 05 January 2026 02:09:30 +0000 (0:00:03.740) 0:00:05.457 ******** 2026-01-05 02:09:54.578971 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-01-05 02:09:54.578975 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-01-05 02:09:54.578979 | orchestrator | 2026-01-05 02:09:54.578983 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-01-05 02:09:54.578987 | orchestrator | Monday 05 January 2026 02:09:37 +0000 (0:00:06.790) 0:00:12.247 ******** 2026-01-05 02:09:54.578991 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-05 02:09:54.578996 | orchestrator | 2026-01-05 02:09:54.578999 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-01-05 02:09:54.579004 | orchestrator | Monday 05 January 2026 02:09:41 +0000 (0:00:03.590) 0:00:15.837 ******** 2026-01-05 02:09:54.579008 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-05 02:09:54.579012 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-01-05 02:09:54.579016 | orchestrator | 2026-01-05 02:09:54.579020 | orchestrator | TASK [service-ks-register : 
aodh | Creating roles] ***************************** 2026-01-05 02:09:54.579023 | orchestrator | Monday 05 January 2026 02:09:45 +0000 (0:00:04.041) 0:00:19.878 ******** 2026-01-05 02:09:54.579046 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-05 02:09:54.579050 | orchestrator | 2026-01-05 02:09:54.579057 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-01-05 02:09:54.579063 | orchestrator | Monday 05 January 2026 02:09:48 +0000 (0:00:03.416) 0:00:23.294 ******** 2026-01-05 02:09:54.579068 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-01-05 02:09:54.579074 | orchestrator | 2026-01-05 02:09:54.579081 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-01-05 02:09:54.579085 | orchestrator | Monday 05 January 2026 02:09:52 +0000 (0:00:03.910) 0:00:27.205 ******** 2026-01-05 02:09:54.579104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:09:54.579130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:09:54.579138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:09:54.579146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:09:54.579154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:09:54.579167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:09:54.579178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:54.579191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:55.895273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:55.895442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}}) 2026-01-05 02:09:55.895458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:55.895488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:09:55.895494 | orchestrator | 2026-01-05 02:09:55.895500 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-01-05 02:09:55.895505 | orchestrator | Monday 05 January 2026 02:09:54 +0000 (0:00:02.112) 0:00:29.317 ******** 2026-01-05 02:09:55.895509 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:09:55.895514 | orchestrator | 2026-01-05 02:09:55.895518 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-01-05 02:09:55.895522 | orchestrator | Monday 05 January 2026 02:09:54 +0000 (0:00:00.137) 0:00:29.454 ******** 2026-01-05 02:09:55.895526 | orchestrator | skipping: [testbed-node-0] 2026-01-05 
02:09:55.895530 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:09:55.895533 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:09:55.895537 | orchestrator | 2026-01-05 02:09:55.895541 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-01-05 02:09:55.895557 | orchestrator | Monday 05 January 2026 02:09:55 +0000 (0:00:00.518) 0:00:29.972 ******** 2026-01-05 02:09:55.895562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 02:09:55.895584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 02:09:55.895588 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 02:09:55.895592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 02:09:55.895601 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:09:55.895605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 02:09:55.895612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 02:09:55.895617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 02:09:55.895626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 02:10:00.985917 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:10:00.986003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 02:10:00.986074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 02:10:00.986085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 02:10:00.986093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 02:10:00.986101 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:10:00.986106 | orchestrator | 2026-01-05 02:10:00.986112 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-01-05 02:10:00.986128 | orchestrator | Monday 05 January 2026 02:09:55 +0000 (0:00:00.668) 0:00:30.641 ******** 2026-01-05 02:10:00.986133 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:10:00.986138 | orchestrator | 2026-01-05 02:10:00.986142 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-01-05 02:10:00.986146 | orchestrator | Monday 05 January 2026 02:09:56 +0000 (0:00:00.767) 0:00:31.408 ******** 2026-01-05 02:10:00.986150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:00.986168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:00.986177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:00.986182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:00.986188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:00.986192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:00.986196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:00.986205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:01.690523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:01.690653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:01.690683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:01.690725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:01.690744 | orchestrator | 2026-01-05 02:10:01.690763 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-01-05 02:10:01.690783 | orchestrator | Monday 05 January 2026 02:10:00 +0000 (0:00:04.317) 0:00:35.726 ******** 2026-01-05 02:10:01.690844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 02:10:01.690867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 02:10:01.690944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 02:10:01.690968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 02:10:01.690987 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:10:01.691007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 02:10:01.691036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 02:10:01.691057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 02:10:01.691079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 02:10:01.691111 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:10:01.691142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 02:10:02.859889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 02:10:02.859971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 02:10:02.859990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 02:10:02.859995 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:10:02.860001 | orchestrator | 2026-01-05 02:10:02.860005 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-01-05 02:10:02.860011 | orchestrator | Monday 05 January 2026 02:10:01 +0000 (0:00:00.710) 0:00:36.436 ******** 2026-01-05 02:10:02.860015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 02:10:02.860036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 02:10:02.860040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 02:10:02.860055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 02:10:02.860060 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:10:02.860064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 02:10:02.860071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 02:10:02.860075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 02:10:02.860082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 02:10:02.860086 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:10:02.860094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 02:10:07.108947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 02:10:07.109054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 02:10:07.109080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 02:10:07.109088 | orchestrator | skipping: [testbed-node-2] 
2026-01-05 02:10:07.109098 | orchestrator | 2026-01-05 02:10:07.109107 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-01-05 02:10:07.109115 | orchestrator | Monday 05 January 2026 02:10:02 +0000 (0:00:01.168) 0:00:37.605 ******** 2026-01-05 02:10:07.109142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:07.109150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:07.109174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:07.109180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:07.109186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:07.109196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:07.109209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:07.109215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:07.109222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:07.109235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:15.932689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}}) 2026-01-05 02:10:15.932801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:15.932829 | orchestrator | 2026-01-05 02:10:15.932839 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-01-05 02:10:15.932846 | orchestrator | Monday 05 January 2026 02:10:07 +0000 (0:00:04.246) 0:00:41.851 ******** 2026-01-05 02:10:15.932854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:15.932864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:15.932870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:15.932891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:15.932898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:15.932914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:15.932921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:15.932927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:15.932934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:15.932941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}}) 2026-01-05 02:10:15.932953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:21.184621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:21.184791 | orchestrator | 2026-01-05 02:10:21.184827 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-01-05 02:10:21.184870 | orchestrator | Monday 05 January 2026 02:10:15 +0000 (0:00:08.822) 0:00:50.673 ******** 2026-01-05 02:10:21.184891 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:10:21.184904 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:10:21.184914 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:10:21.184925 | orchestrator | 2026-01-05 02:10:21.184936 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-01-05 02:10:21.184947 | orchestrator | Monday 05 January 2026 
02:10:17 +0000 (0:00:01.815) 0:00:52.488 ******** 2026-01-05 02:10:21.184960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:21.184975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:21.184987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 02:10:21.185020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:21.185049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}}) 2026-01-05 02:10:21.185061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-05 02:10:21.185075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:21.185089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:21.185102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:21.185116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:10:21.185138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:11:20.798339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-05 02:11:20.798453 | orchestrator | 2026-01-05 02:11:20.798468 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-01-05 02:11:20.798477 | orchestrator | Monday 05 January 2026 02:10:21 +0000 (0:00:03.438) 0:00:55.926 ******** 2026-01-05 02:11:20.798484 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:11:20.798492 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:11:20.798499 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:11:20.798506 | orchestrator | 2026-01-05 02:11:20.798512 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-01-05 02:11:20.798530 | orchestrator | Monday 05 January 2026 02:10:21 +0000 (0:00:00.360) 0:00:56.286 ******** 2026-01-05 02:11:20.798538 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:11:20.798545 | orchestrator | 2026-01-05 02:11:20.798552 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-01-05 02:11:20.798559 | orchestrator | Monday 05 January 2026 02:10:23 +0000 (0:00:02.292) 0:00:58.579 ******** 2026-01-05 02:11:20.798566 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:11:20.798573 | orchestrator | 2026-01-05 02:11:20.798581 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-01-05 02:11:20.798589 | orchestrator | Monday 05 January 2026 02:10:26 +0000 (0:00:02.534) 0:01:01.113 ******** 2026-01-05 02:11:20.798596 | orchestrator | changed: [testbed-node-0] 2026-01-05 
02:11:20.798603 | orchestrator | 2026-01-05 02:11:20.798610 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-01-05 02:11:20.798617 | orchestrator | Monday 05 January 2026 02:10:40 +0000 (0:00:13.952) 0:01:15.066 ******** 2026-01-05 02:11:20.798624 | orchestrator | 2026-01-05 02:11:20.798632 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-01-05 02:11:20.798639 | orchestrator | Monday 05 January 2026 02:10:40 +0000 (0:00:00.073) 0:01:15.139 ******** 2026-01-05 02:11:20.798647 | orchestrator | 2026-01-05 02:11:20.798654 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-01-05 02:11:20.798663 | orchestrator | Monday 05 January 2026 02:10:40 +0000 (0:00:00.071) 0:01:15.211 ******** 2026-01-05 02:11:20.798671 | orchestrator | 2026-01-05 02:11:20.798678 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-01-05 02:11:20.798686 | orchestrator | Monday 05 January 2026 02:10:40 +0000 (0:00:00.263) 0:01:15.474 ******** 2026-01-05 02:11:20.798694 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:11:20.798701 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:11:20.798709 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:11:20.798717 | orchestrator | 2026-01-05 02:11:20.798724 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-01-05 02:11:20.798732 | orchestrator | Monday 05 January 2026 02:10:51 +0000 (0:00:10.855) 0:01:26.330 ******** 2026-01-05 02:11:20.798740 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:11:20.798768 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:11:20.798776 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:11:20.798784 | orchestrator | 2026-01-05 02:11:20.798792 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] 
*********************** 2026-01-05 02:11:20.798800 | orchestrator | Monday 05 January 2026 02:11:01 +0000 (0:00:10.089) 0:01:36.419 ******** 2026-01-05 02:11:20.798807 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:11:20.798814 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:11:20.798821 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:11:20.798827 | orchestrator | 2026-01-05 02:11:20.798834 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-01-05 02:11:20.798842 | orchestrator | Monday 05 January 2026 02:11:11 +0000 (0:00:10.054) 0:01:46.474 ******** 2026-01-05 02:11:20.798850 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:11:20.798858 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:11:20.798866 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:11:20.798873 | orchestrator | 2026-01-05 02:11:20.798881 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:11:20.798891 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 02:11:20.798901 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 02:11:20.798910 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 02:11:20.798917 | orchestrator | 2026-01-05 02:11:20.798926 | orchestrator | 2026-01-05 02:11:20.798934 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:11:20.798941 | orchestrator | Monday 05 January 2026 02:11:20 +0000 (0:00:08.698) 0:01:55.173 ******** 2026-01-05 02:11:20.798950 | orchestrator | =============================================================================== 2026-01-05 02:11:20.798957 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.95s 
2026-01-05 02:11:20.798965 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.86s 2026-01-05 02:11:20.798990 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.09s 2026-01-05 02:11:20.798997 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.05s 2026-01-05 02:11:20.799004 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.82s 2026-01-05 02:11:20.799011 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 8.70s 2026-01-05 02:11:20.799018 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.79s 2026-01-05 02:11:20.799025 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.32s 2026-01-05 02:11:20.799031 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.25s 2026-01-05 02:11:20.799047 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.04s 2026-01-05 02:11:20.799054 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.91s 2026-01-05 02:11:20.799061 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.74s 2026-01-05 02:11:20.799067 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.59s 2026-01-05 02:11:20.799074 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.44s 2026-01-05 02:11:20.799080 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.42s 2026-01-05 02:11:20.799087 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.53s 2026-01-05 02:11:20.799094 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.29s 2026-01-05 
02:11:20.799101 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.11s 2026-01-05 02:11:20.799117 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.82s 2026-01-05 02:11:20.799124 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.17s 2026-01-05 02:11:23.233780 | orchestrator | 2026-01-05 02:11:23 | INFO  | Task 425d1021-0503-4ce1-9fe6-3521ccb3176f (kolla-ceph-rgw) was prepared for execution. 2026-01-05 02:11:23.233856 | orchestrator | 2026-01-05 02:11:23 | INFO  | It takes a moment until task 425d1021-0503-4ce1-9fe6-3521ccb3176f (kolla-ceph-rgw) has been started and output is visible here. 2026-01-05 02:11:59.127355 | orchestrator | 2026-01-05 02:11:59.127458 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 02:11:59.127468 | orchestrator | 2026-01-05 02:11:59.127474 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 02:11:59.127481 | orchestrator | Monday 05 January 2026 02:11:27 +0000 (0:00:00.279) 0:00:00.279 ******** 2026-01-05 02:11:59.127487 | orchestrator | ok: [testbed-manager] 2026-01-05 02:11:59.127494 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:11:59.127500 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:11:59.127506 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:11:59.127513 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:11:59.127519 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:11:59.127525 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:11:59.127531 | orchestrator | 2026-01-05 02:11:59.127537 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 02:11:59.127543 | orchestrator | Monday 05 January 2026 02:11:28 +0000 (0:00:00.833) 0:00:01.112 ******** 2026-01-05 02:11:59.127549 | orchestrator | ok: [testbed-manager] => 
(item=enable_ceph_rgw_True) 2026-01-05 02:11:59.127555 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-01-05 02:11:59.127561 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-01-05 02:11:59.127567 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-01-05 02:11:59.127573 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-01-05 02:11:59.127579 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-01-05 02:11:59.127585 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-01-05 02:11:59.127591 | orchestrator | 2026-01-05 02:11:59.127596 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-05 02:11:59.127602 | orchestrator | 2026-01-05 02:11:59.127608 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-01-05 02:11:59.127614 | orchestrator | Monday 05 January 2026 02:11:29 +0000 (0:00:00.937) 0:00:02.050 ******** 2026-01-05 02:11:59.127620 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 02:11:59.127628 | orchestrator | 2026-01-05 02:11:59.127634 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-01-05 02:11:59.127640 | orchestrator | Monday 05 January 2026 02:11:30 +0000 (0:00:01.448) 0:00:03.498 ******** 2026-01-05 02:11:59.127646 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-01-05 02:11:59.127651 | orchestrator | 2026-01-05 02:11:59.127657 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-01-05 02:11:59.127662 | orchestrator | Monday 05 January 2026 02:11:34 +0000 (0:00:03.824) 0:00:07.322 ******** 2026-01-05 02:11:59.127669 | orchestrator | changed: 
[testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-01-05 02:11:59.127678 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-01-05 02:11:59.127684 | orchestrator |
2026-01-05 02:11:59.127689 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-01-05 02:11:59.127695 | orchestrator | Monday 05 January 2026 02:11:40 +0000 (0:00:06.233) 0:00:13.556 ********
2026-01-05 02:11:59.127701 | orchestrator | ok: [testbed-manager] => (item=service)
2026-01-05 02:11:59.127727 | orchestrator |
2026-01-05 02:11:59.127733 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-05 02:11:59.127739 | orchestrator | Monday 05 January 2026 02:11:43 +0000 (0:00:03.121) 0:00:16.677 ********
2026-01-05 02:11:59.127745 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 02:11:59.127751 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-01-05 02:11:59.127757 | orchestrator |
2026-01-05 02:11:59.127762 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-05 02:11:59.127768 | orchestrator | Monday 05 January 2026 02:11:47 +0000 (0:00:03.765) 0:00:20.443 ********
2026-01-05 02:11:59.127773 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-01-05 02:11:59.127792 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-01-05 02:11:59.127798 | orchestrator |
2026-01-05 02:11:59.127804 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-01-05 02:11:59.127809 | orchestrator | Monday 05 January 2026 02:11:53 +0000 (0:00:06.081) 0:00:26.524 ********
2026-01-05 02:11:59.127815 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-01-05 02:11:59.127820 | orchestrator |
2026-01-05 02:11:59.127826 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:11:59.127832 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:11:59.127838 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:11:59.127843 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:11:59.127849 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:11:59.127855 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:11:59.127877 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:11:59.127883 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:11:59.127889 | orchestrator |
2026-01-05 02:11:59.127895 | orchestrator |
2026-01-05 02:11:59.127901 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:11:59.127906 | orchestrator | Monday 05 January 2026 02:11:58 +0000 (0:00:04.882) 0:00:31.407 ********
2026-01-05 02:11:59.127912 | orchestrator | ===============================================================================
2026-01-05 02:11:59.127918 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.23s
2026-01-05 02:11:59.127924 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.08s
2026-01-05 02:11:59.127929 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.88s
2026-01-05 02:11:59.127935 | orchestrator | service-ks-register : ceph-rgw | Creating
services ---------------------- 3.82s
2026-01-05 02:11:59.127941 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.77s
2026-01-05 02:11:59.127946 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.12s
2026-01-05 02:11:59.127952 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.45s
2026-01-05 02:11:59.127958 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s
2026-01-05 02:11:59.127964 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s
2026-01-05 02:11:59.438342 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-01-05 02:11:59.438418 | orchestrator | + sh -c /opt/configuration/scripts/deploy/310-openstack-extended.sh
2026-01-05 02:12:01.496965 | orchestrator | 2026-01-05 02:12:01 | INFO  | Task ce080e1c-0f6b-4237-8692-158462dd73c4 (gnocchi) was prepared for execution.
2026-01-05 02:12:01.497066 | orchestrator | 2026-01-05 02:12:01 | INFO  | It takes a moment until task ce080e1c-0f6b-4237-8692-158462dd73c4 (gnocchi) has been started and output is visible here.
2026-01-05 02:12:06.758496 | orchestrator |
2026-01-05 02:12:06.758578 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 02:12:06.758586 | orchestrator |
2026-01-05 02:12:06.758591 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 02:12:06.758596 | orchestrator | Monday 05 January 2026 02:12:05 +0000 (0:00:00.278) 0:00:00.278 ********
2026-01-05 02:12:06.758601 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:12:06.758607 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:12:06.758612 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:12:06.758616 | orchestrator |
2026-01-05 02:12:06.758621 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 02:12:06.758625 | orchestrator | Monday 05 January 2026 02:12:06 +0000 (0:00:00.337) 0:00:00.615 ********
2026-01-05 02:12:06.758631 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-01-05 02:12:06.758636 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-01-05 02:12:06.758641 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-01-05 02:12:06.758645 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-01-05 02:12:06.758649 | orchestrator |
2026-01-05 02:12:06.758654 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-01-05 02:12:06.758658 | orchestrator | skipping: no hosts matched
2026-01-05 02:12:06.758663 | orchestrator |
2026-01-05 02:12:06.758667 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:12:06.758673 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:12:06.758679 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
2026-01-05 02:12:06.758698 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:12:06.758703 | orchestrator |
2026-01-05 02:12:06.758707 | orchestrator |
2026-01-05 02:12:06.758711 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:12:06.758716 | orchestrator | Monday 05 January 2026 02:12:06 +0000 (0:00:00.374) 0:00:00.990 ********
2026-01-05 02:12:06.758720 | orchestrator | ===============================================================================
2026-01-05 02:12:06.758724 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s
2026-01-05 02:12:06.758728 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-01-05 02:12:09.098512 | orchestrator | 2026-01-05 02:12:09 | INFO  | Task 95a1a9f4-eaf1-4073-80b1-15332eee09e1 (manila) was prepared for execution.
2026-01-05 02:12:09.098588 | orchestrator | 2026-01-05 02:12:09 | INFO  | It takes a moment until task 95a1a9f4-eaf1-4073-80b1-15332eee09e1 (manila) has been started and output is visible here.
2026-01-05 02:12:53.630618 | orchestrator |
2026-01-05 02:12:53.630724 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 02:12:53.630738 | orchestrator |
2026-01-05 02:12:53.630746 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 02:12:53.630755 | orchestrator | Monday 05 January 2026 02:12:13 +0000 (0:00:00.269) 0:00:00.269 ********
2026-01-05 02:12:53.630764 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:12:53.630774 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:12:53.630782 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:12:53.630790 | orchestrator |
2026-01-05 02:12:53.630799 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 02:12:53.630829 | orchestrator | Monday 05 January 2026 02:12:13 +0000 (0:00:00.348) 0:00:00.617 ********
2026-01-05 02:12:53.630837 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-01-05 02:12:53.630846 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-01-05 02:12:53.630854 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-01-05 02:12:53.630861 | orchestrator |
2026-01-05 02:12:53.630869 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-01-05 02:12:53.630877 | orchestrator |
2026-01-05 02:12:53.630885 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-01-05 02:12:53.630893 | orchestrator | Monday 05 January 2026 02:12:14 +0000 (0:00:00.457) 0:00:01.074 ********
2026-01-05 02:12:53.630901 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:12:53.630911 | orchestrator |
2026-01-05 02:12:53.630919 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-01-05
02:12:53.630927 | orchestrator | Monday 05 January 2026 02:12:14 +0000 (0:00:00.577) 0:00:01.652 ********
2026-01-05 02:12:53.630935 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:12:53.630944 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:12:53.630972 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:12:53.630980 | orchestrator |
2026-01-05 02:12:53.630988 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-01-05 02:12:53.630996 | orchestrator | Monday 05 January 2026 02:12:15 +0000 (0:00:00.457) 0:00:02.109 ********
2026-01-05 02:12:53.631004 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-01-05 02:12:53.631012 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-01-05 02:12:53.631020 | orchestrator |
2026-01-05 02:12:53.631028 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-01-05 02:12:53.631036 | orchestrator | Monday 05 January 2026 02:12:22 +0000 (0:00:07.279) 0:00:09.389 ********
2026-01-05 02:12:53.631044 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-01-05 02:12:53.631053 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-01-05 02:12:53.631061 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-01-05 02:12:53.631069 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-01-05 02:12:53.631077 | orchestrator |
2026-01-05 02:12:53.631085 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-01-05 02:12:53.631093 | orchestrator | Monday 05 January 2026 02:12:36 +0000 (0:00:13.897) 0:00:23.286 ********
2026-01-05 02:12:53.631100 |
orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 02:12:53.631108 | orchestrator |
2026-01-05 02:12:53.631116 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-01-05 02:12:53.631124 | orchestrator | Monday 05 January 2026 02:12:39 +0000 (0:00:03.436) 0:00:26.722 ********
2026-01-05 02:12:53.631131 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 02:12:53.631139 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-01-05 02:12:53.631147 | orchestrator |
2026-01-05 02:12:53.631155 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-01-05 02:12:53.631162 | orchestrator | Monday 05 January 2026 02:12:43 +0000 (0:00:03.957) 0:00:30.680 ********
2026-01-05 02:12:53.631170 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 02:12:53.631195 | orchestrator |
2026-01-05 02:12:53.631204 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-01-05 02:12:53.631212 | orchestrator | Monday 05 January 2026 02:12:47 +0000 (0:00:03.538) 0:00:34.218 ********
2026-01-05 02:12:53.631220 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-01-05 02:12:53.631234 | orchestrator |
2026-01-05 02:12:53.631244 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-01-05 02:12:53.631252 | orchestrator | Monday 05 January 2026 02:12:51 +0000 (0:00:03.977) 0:00:38.195 ********
2026-01-05 02:12:53.631297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 02:12:53.631312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 02:12:53.631322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'},
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 02:12:53.631333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 02:12:53.631342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 02:12:53.631360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 02:12:53.631378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-05 02:13:04.543340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-05 02:13:04.543457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-05 02:13:04.543470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-05 02:13:04.543478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-05 02:13:04.543517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-05 02:13:04.543526 | orchestrator |
2026-01-05 02:13:04.543534 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-01-05 02:13:04.543543 | orchestrator | Monday 05 January 2026 02:12:53 +0000 (0:00:02.415) 0:00:40.611 ********
2026-01-05 02:13:04.543550 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:13:04.543557 | orchestrator |
2026-01-05 02:13:04.543564 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-01-05 02:13:04.543571 | orchestrator | Monday 05 January 2026 02:12:54 +0000 (0:00:01.023) 0:00:41.212 ********
2026-01-05 02:13:04.543577 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:13:04.543585 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:13:04.543592 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:13:04.543598 | orchestrator |
2026-01-05 02:13:04.543605 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-01-05 02:13:04.543612 | orchestrator | Monday 05 January 2026 02:12:55 +0000 (0:00:01.023) 0:00:42.236 ********
2026-01-05 02:13:04.543620 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-05 02:13:04.543643 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-05 02:13:04.543651 |
orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-05 02:13:04.543658 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-05 02:13:04.543665 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-05 02:13:04.543671 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-05 02:13:04.543678 | orchestrator |
2026-01-05 02:13:04.543685 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-01-05 02:13:04.543691 | orchestrator | Monday 05 January 2026 02:12:57 +0000 (0:00:01.825) 0:00:44.061 ********
2026-01-05 02:13:04.543698 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-05 02:13:04.543705 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-05 02:13:04.543712 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-05 02:13:04.543718 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False,
'protocols': ['NFS', 'CIFS']})
2026-01-05 02:13:04.543731 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-05 02:13:04.543738 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-05 02:13:04.543745 | orchestrator |
2026-01-05 02:13:04.543752 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-01-05 02:13:04.543758 | orchestrator | Monday 05 January 2026 02:12:58 +0000 (0:00:01.300) 0:00:45.361 ********
2026-01-05 02:13:04.543766 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-01-05 02:13:04.543773 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-01-05 02:13:04.543780 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-01-05 02:13:04.543786 | orchestrator |
2026-01-05 02:13:04.543793 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-01-05 02:13:04.543800 | orchestrator | Monday 05 January 2026 02:12:59 +0000 (0:00:00.715) 0:00:46.077 ********
2026-01-05 02:13:04.543806 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:13:04.543813 | orchestrator |
2026-01-05 02:13:04.543820 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-01-05 02:13:04.543827 | orchestrator | Monday 05 January 2026 02:12:59 +0000 (0:00:00.142) 0:00:46.220 ********
2026-01-05 02:13:04.543850 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:13:04.543866 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:13:04.543874 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:13:04.543882 | orchestrator |
2026-01-05 02:13:04.543890 | orchestrator | TASK [manila : include_tasks]
**************************************************
2026-01-05 02:13:04.543902 | orchestrator | Monday 05 January 2026 02:12:59 +0000 (0:00:00.546) 0:00:46.766 ********
2026-01-05 02:13:04.543911 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:13:04.543919 | orchestrator |
2026-01-05 02:13:04.543928 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-01-05 02:13:04.543936 | orchestrator | Monday 05 January 2026 02:13:00 +0000 (0:00:00.606) 0:00:47.373 ********
2026-01-05 02:13:04.543950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 02:13:05.438827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 02:13:05.438937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-05 02:13:05.438949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-05 02:13:05.438972 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:05.438980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:05.439002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:05.439011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:05.439023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:05.439031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:05.439038 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:05.439049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:05.439057 | orchestrator | 2026-01-05 02:13:05.439066 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-01-05 02:13:05.439074 | orchestrator | Monday 05 January 2026 02:13:04 +0000 (0:00:04.160) 0:00:51.533 ******** 2026-01-05 02:13:05.439088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 02:13:06.129041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 02:13:06.129280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:06.129303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 02:13:06.129331 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:13:06.129362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 02:13:06.129376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 02:13:06.129388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:06.129428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 02:13:06.129441 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:13:06.129453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 02:13:06.129466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 02:13:06.129483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:06.129511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 02:13:06.129522 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:13:06.129546 | orchestrator | 2026-01-05 02:13:06.129561 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-01-05 02:13:06.129576 | orchestrator | Monday 05 January 2026 02:13:05 +0000 (0:00:00.918) 0:00:52.452 ******** 2026-01-05 02:13:06.129609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 02:13:10.828456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 02:13:10.828566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:10.828582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 02:13:10.828592 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:13:10.828618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 02:13:10.828628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 02:13:10.828659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:10.828685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 02:13:10.828694 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:13:10.828702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 02:13:10.828711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 02:13:10.828723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:10.828728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 02:13:10.828738 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:13:10.828743 | orchestrator | 2026-01-05 02:13:10.828748 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-01-05 02:13:10.828754 | orchestrator | Monday 05 
January 2026 02:13:06 +0000 (0:00:00.884) 0:00:53.337 ******** 2026-01-05 02:13:10.828765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 02:13:17.792057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 02:13:17.792260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 02:13:17.792312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:17.792333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-01-05 02:13:17.792369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:17.792400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:17.792413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:17.792424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:17.792440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:17.792450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:17.792467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:17.792478 | orchestrator | 2026-01-05 02:13:17.792490 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-01-05 02:13:17.792502 | orchestrator | Monday 05 January 2026 02:13:11 +0000 (0:00:04.680) 0:00:58.018 ******** 2026-01-05 02:13:17.792528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 02:13:22.066676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 02:13:22.066772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 02:13:22.066797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:22.066821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:22.066826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:22.066844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:22.066848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:22.066853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:22.066860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:22.066869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:22.066873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:13:22.066878 | orchestrator | 2026-01-05 02:13:22.066883 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-01-05 02:13:22.066889 | orchestrator | Monday 05 January 2026 02:13:17 +0000 (0:00:06.760) 0:01:04.779 ******** 
2026-01-05 02:13:22.066894 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-01-05 02:13:22.066898 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-01-05 02:13:22.066902 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-01-05 02:13:22.066906 | orchestrator | 2026-01-05 02:13:22.066910 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-01-05 02:13:22.066914 | orchestrator | Monday 05 January 2026 02:13:21 +0000 (0:00:03.644) 0:01:08.423 ******** 2026-01-05 02:13:22.066923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 02:13:25.586498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 02:13:25.586605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:25.586613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 02:13:25.586618 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:13:25.586624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 02:13:25.586629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 02:13:25.586634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:25.586647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 02:13:25.586655 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:13:25.586662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 02:13:25.586666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-01-05 02:13:25.586670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 02:13:25.586674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 02:13:25.586678 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:13:25.586683 | orchestrator | 2026-01-05 02:13:25.586688 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-01-05 02:13:25.586693 | orchestrator | Monday 05 January 2026 02:13:22 +0000 (0:00:00.653) 0:01:09.077 ******** 2026-01-05 02:13:25.586701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 02:14:07.898525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 02:14:07.898626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 02:14:07.898636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:14:07.898643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:14:07.898649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-05 02:14:07.898666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-05 02:14:07.898699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-05 02:14:07.898708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-05 02:14:07.898721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:14:07.898726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:14:07.898731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-05 02:14:07.898736 | orchestrator | 2026-01-05 02:14:07.898743 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-01-05 02:14:07.898749 | orchestrator | Monday 05 January 2026 02:13:25 +0000 (0:00:03.505) 0:01:12.583 ******** 2026-01-05 02:14:07.898759 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:14:07.898765 | orchestrator | 2026-01-05 02:14:07.898770 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-01-05 02:14:07.898774 | orchestrator | Monday 05 January 2026 02:13:28 +0000 (0:00:02.374) 0:01:14.957 ******** 2026-01-05 02:14:07.898779 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:14:07.898784 | orchestrator | 2026-01-05 02:14:07.898788 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-01-05 02:14:07.898793 | orchestrator | Monday 05 January 2026 02:13:30 +0000 (0:00:02.483) 0:01:17.441 ******** 2026-01-05 02:14:07.898797 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:14:07.898802 | orchestrator | 2026-01-05 02:14:07.898807 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-01-05 02:14:07.898811 | orchestrator | Monday 05 January 2026 02:14:07 +0000 (0:00:37.122) 0:01:54.563 ******** 2026-01-05 02:14:07.898816 | orchestrator | 2026-01-05 02:14:07.898825 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-01-05 02:14:50.802318 | orchestrator | Monday 05 January 2026 02:14:07 
+0000 (0:00:00.071) 0:01:54.635 ******** 2026-01-05 02:14:50.802442 | orchestrator | 2026-01-05 02:14:50.802454 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-01-05 02:14:50.802460 | orchestrator | Monday 05 January 2026 02:14:07 +0000 (0:00:00.071) 0:01:54.706 ******** 2026-01-05 02:14:50.802466 | orchestrator | 2026-01-05 02:14:50.802471 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-01-05 02:14:50.802477 | orchestrator | Monday 05 January 2026 02:14:07 +0000 (0:00:00.073) 0:01:54.780 ******** 2026-01-05 02:14:50.802483 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:14:50.802489 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:14:50.802494 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:14:50.802499 | orchestrator | 2026-01-05 02:14:50.802505 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-01-05 02:14:50.802510 | orchestrator | Monday 05 January 2026 02:14:22 +0000 (0:00:14.641) 0:02:09.422 ******** 2026-01-05 02:14:50.802515 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:14:50.802520 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:14:50.802526 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:14:50.802531 | orchestrator | 2026-01-05 02:14:50.802551 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-01-05 02:14:50.802556 | orchestrator | Monday 05 January 2026 02:14:28 +0000 (0:00:05.917) 0:02:15.339 ******** 2026-01-05 02:14:50.802561 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:14:50.802566 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:14:50.802572 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:14:50.802577 | orchestrator | 2026-01-05 02:14:50.802582 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-01-05 
02:14:50.802587 | orchestrator | Monday 05 January 2026 02:14:38 +0000 (0:00:10.277) 0:02:25.617 ******** 2026-01-05 02:14:50.802592 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:14:50.802597 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:14:50.802602 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:14:50.802608 | orchestrator | 2026-01-05 02:14:50.802613 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:14:50.802620 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 02:14:50.802628 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 02:14:50.802634 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 02:14:50.802639 | orchestrator | 2026-01-05 02:14:50.802645 | orchestrator | 2026-01-05 02:14:50.802653 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:14:50.802686 | orchestrator | Monday 05 January 2026 02:14:50 +0000 (0:00:11.612) 0:02:37.230 ******** 2026-01-05 02:14:50.802695 | orchestrator | =============================================================================== 2026-01-05 02:14:50.802704 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 37.12s 2026-01-05 02:14:50.802712 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.64s 2026-01-05 02:14:50.802720 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.90s 2026-01-05 02:14:50.802728 | orchestrator | manila : Restart manila-share container -------------------------------- 11.61s 2026-01-05 02:14:50.802736 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.28s 2026-01-05 02:14:50.802744 | 
orchestrator | service-ks-register : manila | Creating services ------------------------ 7.28s
2026-01-05 02:14:50.802752 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.76s
2026-01-05 02:14:50.802761 | orchestrator | manila : Restart manila-data container ---------------------------------- 5.92s
2026-01-05 02:14:50.802769 | orchestrator | manila : Copying over config.json files for services -------------------- 4.68s
2026-01-05 02:14:50.802777 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.16s
2026-01-05 02:14:50.802785 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.98s
2026-01-05 02:14:50.802793 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.96s
2026-01-05 02:14:50.802801 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.64s
2026-01-05 02:14:50.802809 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.54s
2026-01-05 02:14:50.802817 | orchestrator | manila : Check manila containers ---------------------------------------- 3.51s
2026-01-05 02:14:50.802825 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.44s
2026-01-05 02:14:50.802833 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.48s
2026-01-05 02:14:50.802842 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.42s
2026-01-05 02:14:50.802850 | orchestrator | manila : Creating Manila database --------------------------------------- 2.37s
2026-01-05 02:14:50.802858 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.83s
2026-01-05 02:14:51.143624 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-01-05 02:15:03.449854 | orchestrator | 2026-01-05 02:15:03 | INFO  | Task 102fc256-dd4a-4ba6-822a-6ce066b2b2b0 (netdata) was prepared for execution.
2026-01-05 02:15:03.449951 | orchestrator | 2026-01-05 02:15:03 | INFO  | It takes a moment until task 102fc256-dd4a-4ba6-822a-6ce066b2b2b0 (netdata) has been started and output is visible here.
2026-01-05 02:16:40.140980 | orchestrator |
2026-01-05 02:16:40.141092 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 02:16:40.141110 | orchestrator |
2026-01-05 02:16:40.141122 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 02:16:40.141133 | orchestrator | Monday 05 January 2026 02:15:07 +0000 (0:00:00.251) 0:00:00.252 ********
2026-01-05 02:16:40.141140 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-05 02:16:40.141148 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-05 02:16:40.141154 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-05 02:16:40.141161 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-05 02:16:40.141167 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-05 02:16:40.141174 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-05 02:16:40.141180 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-05 02:16:40.141186 | orchestrator |
2026-01-05 02:16:40.141192 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-05 02:16:40.141218 | orchestrator |
2026-01-05 02:16:40.141238 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-05 02:16:40.141244 | orchestrator | Monday 05 January 2026 02:15:08 +0000 (0:00:00.935) 0:00:01.187 ********
2026-01-05 02:16:40.141253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 02:16:40.141261 | orchestrator |
2026-01-05 02:16:40.141267 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-05 02:16:40.141274 | orchestrator | Monday 05 January 2026 02:15:10 +0000 (0:00:01.376) 0:00:02.564 ********
2026-01-05 02:16:40.141280 | orchestrator | ok: [testbed-manager]
2026-01-05 02:16:40.141288 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:16:40.141294 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:16:40.141300 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:16:40.141306 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:16:40.141313 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:16:40.141319 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:16:40.141325 | orchestrator |
2026-01-05 02:16:40.141331 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-05 02:16:40.141337 | orchestrator | Monday 05 January 2026 02:15:12 +0000 (0:00:01.984) 0:00:04.549 ********
2026-01-05 02:16:40.141344 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:16:40.141350 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:16:40.141356 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:16:40.141362 | orchestrator | ok: [testbed-manager]
2026-01-05 02:16:40.141368 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:16:40.141375 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:16:40.141383 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:16:40.141393 | orchestrator |
2026-01-05 02:16:40.141408 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-05 02:16:40.141420 | orchestrator | Monday 05 January 2026 02:15:14 +0000 (0:00:02.344) 0:00:06.894 ********
2026-01-05 02:16:40.141429 | orchestrator | changed: [testbed-manager]
2026-01-05 02:16:40.141439 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:16:40.141448 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:16:40.141457 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:16:40.141465 | orchestrator | changed: [testbed-node-3]
2026-01-05 02:16:40.141474 | orchestrator | changed: [testbed-node-4]
2026-01-05 02:16:40.141483 | orchestrator | changed: [testbed-node-5]
2026-01-05 02:16:40.141492 | orchestrator |
2026-01-05 02:16:40.141500 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-05 02:16:40.141509 | orchestrator | Monday 05 January 2026 02:15:16 +0000 (0:00:01.724) 0:00:08.619 ********
2026-01-05 02:16:40.141519 | orchestrator | changed: [testbed-manager]
2026-01-05 02:16:40.141528 | orchestrator | changed: [testbed-node-3]
2026-01-05 02:16:40.141539 | orchestrator | changed: [testbed-node-4]
2026-01-05 02:16:40.141549 | orchestrator | changed: [testbed-node-5]
2026-01-05 02:16:40.141558 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:16:40.141567 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:16:40.141576 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:16:40.141585 | orchestrator |
2026-01-05 02:16:40.141594 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-05 02:16:40.141603 | orchestrator | Monday 05 January 2026 02:15:34 +0000 (0:00:18.741) 0:00:27.360 ********
2026-01-05 02:16:40.141611 | orchestrator | changed: [testbed-node-4]
2026-01-05 02:16:40.141621 | orchestrator | changed: [testbed-manager]
2026-01-05 02:16:40.141629 | orchestrator | changed: [testbed-node-5]
2026-01-05 02:16:40.141638 | orchestrator | changed: [testbed-node-3]
2026-01-05 02:16:40.141646 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:16:40.141655 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:16:40.141664 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:16:40.141673 | orchestrator |
2026-01-05 02:16:40.141682 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-05 02:16:40.141702 | orchestrator | Monday 05 January 2026 02:16:14 +0000 (0:00:39.475) 0:01:06.836 ********
2026-01-05 02:16:40.141712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 02:16:40.141723 | orchestrator |
2026-01-05 02:16:40.141733 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-05 02:16:40.141743 | orchestrator | Monday 05 January 2026 02:16:16 +0000 (0:00:01.601) 0:01:08.438 ********
2026-01-05 02:16:40.141752 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-05 02:16:40.141761 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-05 02:16:40.141770 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-05 02:16:40.141779 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-05 02:16:40.141808 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-05 02:16:40.141819 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-05 02:16:40.141829 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-05 02:16:40.141838 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-05 02:16:40.141849 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-05 02:16:40.141858 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-05 02:16:40.141869 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-05 02:16:40.141879 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-05 02:16:40.141889 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-05 02:16:40.141899 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-05 02:16:40.141909 | orchestrator |
2026-01-05 02:16:40.141919 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-05 02:16:40.141931 | orchestrator | Monday 05 January 2026 02:16:19 +0000 (0:00:03.588) 0:01:12.026 ********
2026-01-05 02:16:40.141942 | orchestrator | ok: [testbed-manager]
2026-01-05 02:16:40.141953 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:16:40.141972 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:16:40.141979 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:16:40.141985 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:16:40.141991 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:16:40.141998 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:16:40.142004 | orchestrator |
2026-01-05 02:16:40.142010 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-05 02:16:40.142103 | orchestrator | Monday 05 January 2026 02:16:21 +0000 (0:00:01.372) 0:01:13.399 ********
2026-01-05 02:16:40.142111 | orchestrator | changed: [testbed-manager]
2026-01-05 02:16:40.142117 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:16:40.142123 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:16:40.142129 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:16:40.142135 | orchestrator | changed: [testbed-node-3]
2026-01-05 02:16:40.142141 | orchestrator | changed: [testbed-node-4]
2026-01-05 02:16:40.142147 | orchestrator | changed: [testbed-node-5]
2026-01-05 02:16:40.142153 | orchestrator |
2026-01-05 02:16:40.142160 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-05 02:16:40.142166 | orchestrator | Monday 05 January 2026 02:16:22 +0000 (0:00:01.455) 0:01:14.854 ********
2026-01-05 02:16:40.142172 | orchestrator | ok: [testbed-manager]
2026-01-05 02:16:40.142178 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:16:40.142184 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:16:40.142190 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:16:40.142197 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:16:40.142203 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:16:40.142209 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:16:40.142215 | orchestrator |
2026-01-05 02:16:40.142221 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-05 02:16:40.142235 | orchestrator | Monday 05 January 2026 02:16:23 +0000 (0:00:01.139) 0:01:15.993 ********
2026-01-05 02:16:40.142241 | orchestrator | ok: [testbed-manager]
2026-01-05 02:16:40.142247 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:16:40.142254 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:16:40.142260 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:16:40.142266 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:16:40.142272 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:16:40.142278 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:16:40.142284 | orchestrator |
2026-01-05 02:16:40.142290 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-05 02:16:40.142296 | orchestrator | Monday 05 January 2026 02:16:25 +0000 (0:00:01.570) 0:01:17.564 ********
2026-01-05 02:16:40.142303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-05 02:16:40.142312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 02:16:40.142319 | orchestrator |
2026-01-05 02:16:40.142326 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-05 02:16:40.142332 | orchestrator | Monday 05 January 2026 02:16:26 +0000 (0:00:01.318) 0:01:18.882 ********
2026-01-05 02:16:40.142338 | orchestrator | changed: [testbed-manager]
2026-01-05 02:16:40.142344 | orchestrator |
2026-01-05 02:16:40.142351 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-05 02:16:40.142357 | orchestrator | Monday 05 January 2026 02:16:28 +0000 (0:00:01.868) 0:01:20.751 ********
2026-01-05 02:16:40.142363 | orchestrator | changed: [testbed-manager]
2026-01-05 02:16:40.142369 | orchestrator | changed: [testbed-node-5]
2026-01-05 02:16:40.142375 | orchestrator | changed: [testbed-node-4]
2026-01-05 02:16:40.142381 | orchestrator | changed: [testbed-node-3]
2026-01-05 02:16:40.142387 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:16:40.142393 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:16:40.142400 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:16:40.142406 | orchestrator |
2026-01-05 02:16:40.142412 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:16:40.142418 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:16:40.142433 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:16:40.142440 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:16:40.142446 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:16:40.142461 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:16:40.660267 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:16:40.660365 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:16:40.660376 | orchestrator |
2026-01-05 02:16:40.660383 | orchestrator |
2026-01-05 02:16:40.660390 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:16:40.660399 | orchestrator | Monday 05 January 2026 02:16:40 +0000 (0:00:11.764) 0:01:32.515 ********
2026-01-05 02:16:40.660406 | orchestrator | ===============================================================================
2026-01-05 02:16:40.660412 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.48s
2026-01-05 02:16:40.660444 | orchestrator | osism.services.netdata : Add repository -------------------------------- 18.74s
2026-01-05 02:16:40.660451 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.76s
2026-01-05 02:16:40.660471 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.59s
2026-01-05 02:16:40.660478 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.34s
2026-01-05 02:16:40.660485 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.98s
2026-01-05 02:16:40.660492 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.87s
2026-01-05 02:16:40.660498 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.72s
2026-01-05 02:16:40.660504 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.60s
2026-01-05 02:16:40.660511 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.57s
2026-01-05 02:16:40.660517 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.46s
2026-01-05 02:16:40.660523 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.38s
2026-01-05 02:16:40.660529 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.37s
2026-01-05 02:16:40.660537 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.32s
2026-01-05 02:16:40.660543 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.14s
2026-01-05 02:16:40.660549 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s
2026-01-05 02:16:43.551000 | orchestrator | 2026-01-05 02:16:43 | INFO  | Task c0b57b67-b7a1-45ac-91ae-668c68490c09 (prometheus) was prepared for execution.
2026-01-05 02:16:43.551128 | orchestrator | 2026-01-05 02:16:43 | INFO  | It takes a moment until task c0b57b67-b7a1-45ac-91ae-668c68490c09 (prometheus) has been started and output is visible here.
2026-01-05 02:16:52.913775 | orchestrator |
2026-01-05 02:16:52.913879 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 02:16:52.913893 | orchestrator |
2026-01-05 02:16:52.913902 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 02:16:52.913913 | orchestrator | Monday 05 January 2026 02:16:47 +0000 (0:00:00.252) 0:00:00.252 ********
2026-01-05 02:16:52.913921 | orchestrator | ok: [testbed-manager]
2026-01-05 02:16:52.913931 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:16:52.913939 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:16:52.913948 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:16:52.913957 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:16:52.913966 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:16:52.913975 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:16:52.913983 | orchestrator |
2026-01-05 02:16:52.913992 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 02:16:52.914001 | orchestrator | Monday 05 January 2026 02:16:48 +0000 (0:00:00.755) 0:00:01.007 ********
2026-01-05 02:16:52.914010 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-01-05 02:16:52.914129 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-01-05 02:16:52.914135 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-01-05 02:16:52.914141 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-01-05 02:16:52.914146 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-01-05 02:16:52.914152 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-01-05 02:16:52.914157 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-01-05 02:16:52.914162 | orchestrator |
2026-01-05 02:16:52.914168 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-01-05 02:16:52.914173 | orchestrator |
2026-01-05 02:16:52.914178 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-05 02:16:52.914206 | orchestrator | Monday 05 January 2026 02:16:49 +0000 (0:00:00.985) 0:00:01.993 ********
2026-01-05 02:16:52.914212 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 02:16:52.914219 | orchestrator |
2026-01-05 02:16:52.914224 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-01-05 02:16:52.914230 | orchestrator | Monday 05 January 2026 02:16:50 +0000 (0:00:01.407) 0:00:03.401 ********
2026-01-05 02:16:52.914237 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-05 02:16:52.914258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:16:52.914266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:16:52.914271 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:16:52.914292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:52.914298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:16:52.914310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:16:52.914316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:52.914323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:16:52.914334 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:16:52.914342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:52.914353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:16:54.098767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:54.098845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:54.098871 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 02:16:54.098878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:16:54.098892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:16:54.098897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:16:54.098912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:54.098916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:16:54.098925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:16:54.098929 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:54.098932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:16:54.098939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:16:54.098943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:16:54.098947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:54.098956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:58.965997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:16:58.966141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:16:58.966148 | orchestrator |
2026-01-05 02:16:58.966154 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-05 02:16:58.966160 | orchestrator | Monday 05 January 2026 02:16:54 +0000 (0:00:03.381) 0:00:06.782 ********
2026-01-05 02:16:58.966165 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 02:16:58.966170 | orchestrator |
2026-01-05 02:16:58.966174 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-01-05 02:16:58.966178 | orchestrator | Monday 05 January 2026 02:16:55 +0000 (0:00:01.495) 0:00:08.278 ********
2026-01-05 02:16:58.966196 | orchestrator | changed:
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 02:16:58.966202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:16:58.966206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:16:58.966210 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:16:58.966243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:16:58.966247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:16:58.966251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:16:58.966255 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:16:58.966262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:16:58.966268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:16:58.966272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:16:58.966280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:16:58.966290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:17:01.352260 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:17:01.352366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:17:01.352381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:01.352391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:01.352497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:01.352512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 02:17:01.352539 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 02:17:01.352567 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 02:17:01.352577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 02:17:01.352585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:17:01.352596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:17:01.352603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:17:01.352617 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:01.352625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:01.352639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:02.045187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:02.045265 | orchestrator | 2026-01-05 02:17:02.045273 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-05 02:17:02.045279 | orchestrator | Monday 05 January 2026 02:17:01 +0000 (0:00:05.756) 0:00:14.034 ******** 2026-01-05 02:17:02.045285 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 02:17:02.045305 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 02:17:02.045326 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 02:17:02.045333 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 02:17:02.045350 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:17:02.045355 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:17:02.045360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 02:17:02.045365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:17:02.045372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:17:02.045376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 02:17:02.045384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-05 02:17:02.045388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 02:17:02.045392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:17:02.045400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:17:02.691389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:02.691486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:02.691498 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:17:02.691506 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:17:02.691529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:02.691557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:02.691564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:02.691571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:02.691578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:02.691585 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:17:02.691607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:02.691612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:02.691616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:17:02.691628 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:17:02.691632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:02.691636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:02.691640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:17:02.691644 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:17:02.691648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:02.691656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:03.608781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:17:03.608863 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:17:03.608872 | orchestrator |
2026-01-05 02:17:03.608878 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-01-05 02:17:03.608884 | orchestrator | Monday 05 January 2026 02:17:02 +0000 (0:00:01.338) 0:00:15.373 ********
2026-01-05 02:17:03.608920 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-05 02:17:03.608927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:03.608933 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:03.608940 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 02:17:03.608959 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:03.608964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:03.608979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:03.608987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:03.608992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:03.608997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:03.609002 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:17:03.609007 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:17:03.609014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:03.609022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:03.609062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:04.613199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:04.613305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:04.613324 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:17:04.613330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:04.613335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:04.613340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:04.613345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:04.613349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:04.613353 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:17:04.613370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:04.613378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:04.613385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:17:04.613389 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:17:04.613393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:04.613397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:04.613401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:17:04.613405 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:17:04.613409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:04.613447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:07.994215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:17:07.994293 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:17:07.994301 | orchestrator |
2026-01-05 02:17:07.994306 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-01-05 02:17:07.994313 | orchestrator | Monday 05 January 2026 02:17:04 +0000 (0:00:01.918) 0:00:17.291 ********
2026-01-05 02:17:07.994331 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-05 02:17:07.994337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:07.994342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:07.994347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:07.994351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:07.994380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:07.994385 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:07.994389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:17:07.994396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:07.994400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:07.994404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:07.994409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:07.994417 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:07.994425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:10.932131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:10.932253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:10.932265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:10.932274 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-05 02:17:10.932283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:17:10.932308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:17:10.932327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:17:10.932334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-05 02:17:10.932344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-05 02:17:10.932351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:17:10.932358 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:10.932364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:17:10.932376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:10.932383 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:10.932405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:17:15.014426 | orchestrator | 2026-01-05 02:17:15.014521 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-05 02:17:15.014533 | orchestrator | Monday 05 January 2026 02:17:10 +0000 (0:00:06.323) 0:00:23.614 ******** 2026-01-05 02:17:15.014539 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 02:17:15.014545 | orchestrator | 2026-01-05 02:17:15.014551 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-05 02:17:15.014556 | orchestrator | Monday 05 January 2026 02:17:11 +0000 (0:00:00.907) 0:00:24.522 ******** 2026-01-05 02:17:15.014576 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
996, 'inode': 1328633, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014585 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328633, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014590 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328633, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 02:17:15.014618 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328678, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.284406, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014627 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328633, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014635 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328678, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.284406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014661 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328633, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014673 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328633, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014682 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328678, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.284406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014692 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328627, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2678294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014706 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328633, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014714 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328627, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2678294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014723 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328678, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.284406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:15.014738 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328678, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.284406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953753 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328627, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2678294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953832 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328678, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.284406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953853 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328659, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1767572306.2803864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953859 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328659, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2803864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953863 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328627, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2678294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953867 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328659, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2803864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953871 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328623, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953891 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328627, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2678294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953895 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328627, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2678294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953903 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328678, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.284406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 02:17:16.953907 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328659, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2803864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953911 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328623, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953915 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328623, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953919 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328635, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.269632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:16.953929 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328623, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:18.936853 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328659, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1767572306.2803864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:18.936998 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328659, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2803864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:18.937020 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328635, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.269632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:18.937131 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328654, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2755117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:18.937153 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328635, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.269632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:18.937166 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328623, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:18.937196 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328623, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:18.937232 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328654, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2755117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:18.937259 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328654, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2755117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:18.937271 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328627, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2678294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:18.937284 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328635, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.269632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:18.937293 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328638, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2713861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:18.937300 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328635, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.269632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:18.937311 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328635, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.269632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:18.937330 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328638, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2713861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659234 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328638, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2713861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659355 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328654, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2755117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659375 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328631, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659400 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328631, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659414 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328654, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2755117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659447 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328654, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2755117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659486 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328638, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2713861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659521 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328631, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659537 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328638, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2713861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659551 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328675, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2833865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659564 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328675, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2833865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659578 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328675, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2833865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659599 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328659, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2803864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659624 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328638, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2713861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:20.659649 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328631, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329646 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328616, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2647817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329741 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328631, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329750 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328616, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2647817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329755 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328703, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2976553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329799 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328616, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2647817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329808 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328631, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329814 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328675, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2833865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329835 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328675, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2833865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329842 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328669, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2823863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329849 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328703, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2976553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329855 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328703, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2976553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329879 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328675, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2833865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329886 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328616, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2647817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329893 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328616, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2647817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:22.329905 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328623, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925593 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328625, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925693 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328669, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2823863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925706 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328669, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2823863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925760 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328616, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2647817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925769 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328703, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2976553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925775 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1328618, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2651033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925782 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328703, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2976553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925802 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328703, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2976553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925810 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328625, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925821 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328625, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925831 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328635, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.269632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925838 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328669, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2823863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925844 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328669, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2823863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925851 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328669, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2823863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:23.925862 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328651, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2747087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120119 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1328618, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2651033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120381 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1328618, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2651033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120436 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328625, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120459 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328651, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2747087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120480 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328625, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120500 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328625, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120519 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328643, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2723863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120567 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328643, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2723863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120601 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328651, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2747087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120622 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1328618, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2651033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120648 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1328618, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2651033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120668 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1328618, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2651033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:25.120687 | orchestrator
| skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328698, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2943864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:25.120707 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328651, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2747087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:25.120739 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328698, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2943864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:30.614499 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:17:30.614604 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:17:30.614616 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328651, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2747087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:30.614641 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328654, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2755117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-05 02:17:30.614649 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328651, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2747087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:30.614657 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328643, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2723863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:30.614665 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328643, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2723863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:30.614672 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328643, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2723863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:30.614716 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328698, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1767572306.2943864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:30.614725 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:17:30.614731 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328698, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2943864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:30.614738 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:17:30.614749 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328643, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2723863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-05 02:17:30.614755 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328698, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1767572306.2943864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:30.614761 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:17:30.614767 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328698, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2943864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:30.614774 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:17:30.614780 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328638, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2713861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:30.614794 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328631, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2683861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:30.614805 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328675, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2833865, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:58.213161 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328616, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2647817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:58.213272 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328703, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2976553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:58.213296 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328669, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2823863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:58.213312 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328625, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2655475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:58.213329 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1328618, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2651033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:58.213369 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328651, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2747087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:58.213380 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328643, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2723863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:58.213406 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328698, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2943864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-05 02:17:58.213416 | orchestrator |
2026-01-05 02:17:58.213426 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-05 02:17:58.213437 | orchestrator | Monday 05 January 2026 02:17:38 +0000 (0:00:26.191) 0:00:50.714 ********
2026-01-05 02:17:58.213446 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 02:17:58.213456 | orchestrator |
2026-01-05 02:17:58.213465 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-05 02:17:58.213474 | orchestrator | Monday 05 January 2026 02:17:38 +0000 (0:00:00.764) 0:00:51.478 ********
2026-01-05 02:17:58.213483 | orchestrator | [WARNING]: Skipped
2026-01-05 02:17:58.213493 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213508 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-01-05 02:17:58.213517 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213526 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-01-05 02:17:58.213535 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 02:17:58.213544 | orchestrator | [WARNING]: Skipped
2026-01-05 02:17:58.213552 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213561 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-01-05 02:17:58.213569 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213578 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-01-05 02:17:58.213587 | orchestrator | [WARNING]: Skipped
2026-01-05 02:17:58.213595 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213604 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-01-05 02:17:58.213613 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213622 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-01-05 02:17:58.213630 | orchestrator | [WARNING]: Skipped
2026-01-05 02:17:58.213646 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213655 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-01-05 02:17:58.213664 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213672 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-01-05 02:17:58.213681 | orchestrator | [WARNING]: Skipped
2026-01-05 02:17:58.213690 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213699 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-01-05 02:17:58.213708 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213716 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-01-05 02:17:58.213725 | orchestrator | [WARNING]: Skipped
2026-01-05 02:17:58.213734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213742 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-01-05 02:17:58.213751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213760 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-01-05 02:17:58.213768 | orchestrator | [WARNING]: Skipped
2026-01-05 02:17:58.213777 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213786 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-01-05 02:17:58.213795 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-05 02:17:58.213803 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-01-05 02:17:58.213812 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-05 02:17:58.213820 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-05 02:17:58.213829 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-05 02:17:58.213838 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-05 02:17:58.213846 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-05 02:17:58.213855 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-05 02:17:58.213864 | orchestrator |
2026-01-05 02:17:58.213872 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-05 02:17:58.213881 | orchestrator | Monday 05 January 2026 02:17:40 +0000 (0:00:01.853) 0:00:53.331 ********
2026-01-05 02:17:58.213890 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 02:17:58.213900 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:17:58.213910 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 02:17:58.213925 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:17:58.213948 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 02:17:58.213964 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:17:58.213987 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 02:18:15.900897 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:18:15.901038 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 02:18:15.901055 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:18:15.901063 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 02:18:15.901069 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:18:15.901076 | orchestrator | changed: [testbed-manager] =>
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-05 02:18:15.901082 | orchestrator |
2026-01-05 02:18:15.901090 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-01-05 02:18:15.901097 | orchestrator | Monday 05 January 2026 02:17:58 +0000 (0:00:17.562) 0:01:10.893 ********
2026-01-05 02:18:15.901127 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 02:18:15.901135 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:18:15.901143 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 02:18:15.901149 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:18:15.901155 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 02:18:15.901174 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 02:18:15.901180 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:18:15.901186 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:18:15.901192 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 02:18:15.901198 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:18:15.901203 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 02:18:15.901208 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:18:15.901211 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-05 02:18:15.901215 | orchestrator |
2026-01-05 02:18:15.901219 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-01-05 02:18:15.901223 | orchestrator | Monday 05 January 2026 02:18:01 +0000 (0:00:03.116) 0:01:14.010 ********
2026-01-05 02:18:15.901227 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 02:18:15.901234 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 02:18:15.901237 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:18:15.901241 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:18:15.901245 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 02:18:15.901249 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:18:15.901253 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 02:18:15.901256 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:18:15.901260 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 02:18:15.901264 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 02:18:15.901268 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:18:15.901272 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-05 02:18:15.901275 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:18:15.901279 | orchestrator |
2026-01-05 02:18:15.901283 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-01-05 02:18:15.901287 | orchestrator | Monday 05 January 2026 02:18:02 +0000 (0:00:01.589) 0:01:15.600 ********
2026-01-05 02:18:15.901290 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 02:18:15.901294 | orchestrator |
2026-01-05 02:18:15.901298 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-01-05 02:18:15.901303 | orchestrator | Monday 05 January 2026 02:18:03 +0000 (0:00:00.748) 0:01:16.348 ********
2026-01-05 02:18:15.901307 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:18:15.901310 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:18:15.901314 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:18:15.901318 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:18:15.901322 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:18:15.901331 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:18:15.901335 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:18:15.901339 | orchestrator |
2026-01-05 02:18:15.901342 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-01-05 02:18:15.901346 | orchestrator | Monday 05 January 2026 02:18:04 +0000 (0:00:00.786) 0:01:17.135 ********
2026-01-05 02:18:15.901351 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:18:15.901354 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:18:15.901358 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:18:15.901362 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:18:15.901366 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:18:15.901370 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:18:15.901374 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:18:15.901377 | orchestrator |
2026-01-05 02:18:15.901381 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-01-05 02:18:15.901398 | orchestrator | Monday 05 January 2026 02:18:06 +0000 (0:00:02.533) 0:01:19.668 ********
2026-01-05 02:18:15.901402 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 02:18:15.901406 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 02:18:15.901410 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 02:18:15.901414 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:18:15.901418 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:18:15.901422 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:18:15.901425 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 02:18:15.901429 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:18:15.901434 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 02:18:15.901438 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:18:15.901443 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 02:18:15.901447 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:18:15.901452 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-05 02:18:15.901460 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:18:15.901464 | orchestrator |
2026-01-05 02:18:15.901469 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-01-05 02:18:15.901473 | orchestrator | Monday 05 January 2026 02:18:08 +0000 (0:00:01.417) 0:01:21.086 ********
2026-01-05 02:18:15.901478 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 02:18:15.901483 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:18:15.901487 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 02:18:15.901492 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:18:15.901496 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 02:18:15.901501 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:18:15.901505 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 02:18:15.901510 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:18:15.901514 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 02:18:15.901519 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:18:15.901523 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 02:18:15.901528 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:18:15.901532 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-05 02:18:15.901541 | orchestrator |
2026-01-05 02:18:15.901545 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-01-05 02:18:15.901550 | orchestrator | Monday 05 January 2026 02:18:09 +0000 (0:00:01.406) 0:01:22.493 ********
2026-01-05 02:18:15.901555 | orchestrator | [WARNING]: Skipped
2026-01-05 02:18:15.901561 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-01-05 02:18:15.901565 | orchestrator | due to this access issue:
2026-01-05 02:18:15.901570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-01-05 02:18:15.901575 | orchestrator | not a directory
2026-01-05 02:18:15.901579 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-05 02:18:15.901584 | orchestrator |
2026-01-05 02:18:15.901588 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-01-05 02:18:15.901593 | orchestrator | Monday 05 January 2026 02:18:11 +0000 (0:00:01.597) 0:01:24.091 ********
2026-01-05 02:18:15.901597 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:18:15.901602 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:18:15.901606 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:18:15.901611 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:18:15.901615 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:18:15.901620 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:18:15.901624 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:18:15.901629 | orchestrator |
2026-01-05 02:18:15.901633 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-01-05 02:18:15.901637 | orchestrator | Monday 05 January 2026 02:18:12 +0000 (0:00:00.775) 0:01:24.866 ********
2026-01-05 02:18:15.901640 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:18:15.901644 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:18:15.901648 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:18:15.901652 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:18:15.901656 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:18:15.901660 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:18:15.901664 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:18:15.901667 | orchestrator |
2026-01-05 02:18:15.901671 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-01-05 02:18:15.901675 | orchestrator | Monday 05 January 2026 02:18:13 +0000 (0:00:00.962) 0:01:25.828 ********
2026-01-05 02:18:15.901686 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-05 02:18:17.805878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:18:17.805963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-05 02:18:17.805993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter',
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:18:17.806001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:18:17.806062 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:18:17.806070 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:18:17.806077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:17.806096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:17.806108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:18:17.806122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-05 02:18:17.806128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:17.806135 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:18:17.806142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:18:17.806150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:17.806158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:17.806170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 02:18:19.488865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:18:19.489055 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 02:18:19.489078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:19.489091 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 02:18:19.489102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:18:19.489113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-05 02:18:19.489145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:18:19.489168 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:19.489175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-05 02:18:19.489182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:19.489189 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:19.489196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:18:19.489203 | orchestrator | 2026-01-05 02:18:19.489211 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-05 02:18:19.489218 | orchestrator | Monday 05 January 2026 02:18:17 +0000 (0:00:04.668) 0:01:30.496 ******** 2026-01-05 02:18:19.489225 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-05 02:18:19.489232 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:18:19.489239 | orchestrator | 2026-01-05 02:18:19.489245 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 02:18:19.489251 | orchestrator | Monday 05 January 2026 02:18:18 +0000 (0:00:01.100) 0:01:31.597 ******** 2026-01-05 02:18:19.489258 | orchestrator | 2026-01-05 02:18:19.489264 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 02:18:19.489270 | 
orchestrator | Monday 05 January 2026 02:18:18 +0000 (0:00:00.078) 0:01:31.676 ******** 2026-01-05 02:18:19.489276 | orchestrator | 2026-01-05 02:18:19.489284 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 02:18:19.489297 | orchestrator | Monday 05 January 2026 02:18:19 +0000 (0:00:00.079) 0:01:31.755 ******** 2026-01-05 02:18:19.489304 | orchestrator | 2026-01-05 02:18:19.489312 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 02:18:19.489319 | orchestrator | Monday 05 January 2026 02:18:19 +0000 (0:00:00.078) 0:01:31.834 ******** 2026-01-05 02:18:19.489327 | orchestrator | 2026-01-05 02:18:19.489334 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 02:18:19.489341 | orchestrator | Monday 05 January 2026 02:18:19 +0000 (0:00:00.076) 0:01:31.911 ******** 2026-01-05 02:18:19.489348 | orchestrator | 2026-01-05 02:18:19.489356 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 02:18:19.489363 | orchestrator | Monday 05 January 2026 02:18:19 +0000 (0:00:00.077) 0:01:31.988 ******** 2026-01-05 02:18:19.489370 | orchestrator | 2026-01-05 02:18:19.489383 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-05 02:19:55.442209 | orchestrator | Monday 05 January 2026 02:18:19 +0000 (0:00:00.069) 0:01:32.057 ******** 2026-01-05 02:19:55.442320 | orchestrator | 2026-01-05 02:19:55.442327 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-05 02:19:55.442349 | orchestrator | Monday 05 January 2026 02:18:19 +0000 (0:00:00.102) 0:01:32.159 ******** 2026-01-05 02:19:55.442353 | orchestrator | changed: [testbed-manager] 2026-01-05 02:19:55.442359 | orchestrator | 2026-01-05 02:19:55.442363 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-node-exporter container] ****** 2026-01-05 02:19:55.442367 | orchestrator | Monday 05 January 2026 02:18:41 +0000 (0:00:21.981) 0:01:54.141 ******** 2026-01-05 02:19:55.442371 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:19:55.442375 | orchestrator | changed: [testbed-node-3] 2026-01-05 02:19:55.442379 | orchestrator | changed: [testbed-node-5] 2026-01-05 02:19:55.442383 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:19:55.442386 | orchestrator | changed: [testbed-manager] 2026-01-05 02:19:55.442390 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:19:55.442394 | orchestrator | changed: [testbed-node-4] 2026-01-05 02:19:55.442398 | orchestrator | 2026-01-05 02:19:55.442402 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-05 02:19:55.442405 | orchestrator | Monday 05 January 2026 02:18:55 +0000 (0:00:13.607) 0:02:07.749 ******** 2026-01-05 02:19:55.442409 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:19:55.442413 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:19:55.442417 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:19:55.442420 | orchestrator | 2026-01-05 02:19:55.442424 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-05 02:19:55.442429 | orchestrator | Monday 05 January 2026 02:19:05 +0000 (0:00:10.232) 0:02:17.981 ******** 2026-01-05 02:19:55.442433 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:19:55.442437 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:19:55.442440 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:19:55.442444 | orchestrator | 2026-01-05 02:19:55.442449 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-05 02:19:55.442454 | orchestrator | Monday 05 January 2026 02:19:15 +0000 (0:00:10.486) 0:02:28.468 ******** 2026-01-05 02:19:55.442460 | orchestrator | changed: 
[testbed-node-1] 2026-01-05 02:19:55.442466 | orchestrator | changed: [testbed-node-3] 2026-01-05 02:19:55.442472 | orchestrator | changed: [testbed-manager] 2026-01-05 02:19:55.442478 | orchestrator | changed: [testbed-node-5] 2026-01-05 02:19:55.442484 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:19:55.442491 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:19:55.442497 | orchestrator | changed: [testbed-node-4] 2026-01-05 02:19:55.442503 | orchestrator | 2026-01-05 02:19:55.442508 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-05 02:19:55.442514 | orchestrator | Monday 05 January 2026 02:19:25 +0000 (0:00:09.475) 0:02:37.944 ******** 2026-01-05 02:19:55.442546 | orchestrator | changed: [testbed-manager] 2026-01-05 02:19:55.442553 | orchestrator | 2026-01-05 02:19:55.442560 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-05 02:19:55.442565 | orchestrator | Monday 05 January 2026 02:19:33 +0000 (0:00:08.542) 0:02:46.486 ******** 2026-01-05 02:19:55.442570 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:19:55.442576 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:19:55.442591 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:19:55.442597 | orchestrator | 2026-01-05 02:19:55.442603 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-05 02:19:55.442616 | orchestrator | Monday 05 January 2026 02:19:44 +0000 (0:00:10.224) 0:02:56.710 ******** 2026-01-05 02:19:55.442621 | orchestrator | changed: [testbed-manager] 2026-01-05 02:19:55.442627 | orchestrator | 2026-01-05 02:19:55.442632 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-05 02:19:55.442638 | orchestrator | Monday 05 January 2026 02:19:49 +0000 (0:00:05.334) 0:03:02.045 ******** 2026-01-05 02:19:55.442644 | orchestrator | changed: 
[testbed-node-4] 2026-01-05 02:19:55.442649 | orchestrator | changed: [testbed-node-3] 2026-01-05 02:19:55.442655 | orchestrator | changed: [testbed-node-5] 2026-01-05 02:19:55.442661 | orchestrator | 2026-01-05 02:19:55.442667 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:19:55.442674 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-05 02:19:55.442682 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 02:19:55.442688 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 02:19:55.442693 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 02:19:55.442699 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 02:19:55.442704 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 02:19:55.442710 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-05 02:19:55.442716 | orchestrator | 2026-01-05 02:19:55.442722 | orchestrator | 2026-01-05 02:19:55.442728 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:19:55.442734 | orchestrator | Monday 05 January 2026 02:19:54 +0000 (0:00:05.570) 0:03:07.615 ******** 2026-01-05 02:19:55.442740 | orchestrator | =============================================================================== 2026-01-05 02:19:55.442760 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.19s 2026-01-05 02:19:55.442765 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.98s 2026-01-05 02:19:55.442775 | 
orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.56s 2026-01-05 02:19:55.442779 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.61s 2026-01-05 02:19:55.442784 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.49s 2026-01-05 02:19:55.442788 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.23s 2026-01-05 02:19:55.442792 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.22s 2026-01-05 02:19:55.442797 | orchestrator | prometheus : Restart prometheus-cadvisor container ---------------------- 9.48s 2026-01-05 02:19:55.442801 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.54s 2026-01-05 02:19:55.442812 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.32s 2026-01-05 02:19:55.442817 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.76s 2026-01-05 02:19:55.442821 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.57s 2026-01-05 02:19:55.442825 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.33s 2026-01-05 02:19:55.442830 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.67s 2026-01-05 02:19:55.442834 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.38s 2026-01-05 02:19:55.442839 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.12s 2026-01-05 02:19:55.442843 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.53s 2026-01-05 02:19:55.442848 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.92s 2026-01-05 02:19:55.442852 | orchestrator | 
prometheus : Find prometheus host config overrides ---------------------- 1.85s 2026-01-05 02:19:55.442857 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 1.60s 2026-01-05 02:19:58.433076 | orchestrator | 2026-01-05 02:19:58 | INFO  | Task cadfd091-c2f0-41e5-9823-838971fd566d (grafana) was prepared for execution. 2026-01-05 02:19:58.433164 | orchestrator | 2026-01-05 02:19:58 | INFO  | It takes a moment until task cadfd091-c2f0-41e5-9823-838971fd566d (grafana) has been started and output is visible here. 2026-01-05 02:20:08.431854 | orchestrator | 2026-01-05 02:20:08.431957 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 02:20:08.432033 | orchestrator | 2026-01-05 02:20:08.432040 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 02:20:08.432047 | orchestrator | Monday 05 January 2026 02:20:02 +0000 (0:00:00.267) 0:00:00.267 ******** 2026-01-05 02:20:08.432053 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:20:08.432060 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:20:08.432065 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:20:08.432071 | orchestrator | 2026-01-05 02:20:08.432076 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 02:20:08.432082 | orchestrator | Monday 05 January 2026 02:20:03 +0000 (0:00:00.322) 0:00:00.589 ******** 2026-01-05 02:20:08.432088 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-05 02:20:08.432095 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-05 02:20:08.432100 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-05 02:20:08.432106 | orchestrator | 2026-01-05 02:20:08.432112 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-05 02:20:08.432117 | orchestrator | 2026-01-05 
02:20:08.432123 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-05 02:20:08.432129 | orchestrator | Monday 05 January 2026 02:20:03 +0000 (0:00:00.506) 0:00:01.096 ******** 2026-01-05 02:20:08.432135 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:20:08.432143 | orchestrator | 2026-01-05 02:20:08.432149 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-05 02:20:08.432155 | orchestrator | Monday 05 January 2026 02:20:04 +0000 (0:00:00.582) 0:00:01.678 ******** 2026-01-05 02:20:08.432164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:08.432205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:08.432212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:08.432218 | orchestrator | 2026-01-05 02:20:08.432223 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-05 02:20:08.432229 | orchestrator | Monday 05 January 2026 02:20:05 +0000 (0:00:00.935) 0:00:02.613 ******** 2026-01-05 02:20:08.432235 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-01-05 02:20:08.432241 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-05 02:20:08.432246 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 02:20:08.432252 | orchestrator | 2026-01-05 02:20:08.432258 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-05 02:20:08.432264 | orchestrator | Monday 05 January 2026 02:20:05 +0000 (0:00:00.864) 0:00:03.477 ******** 2026-01-05 02:20:08.432269 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:20:08.432275 | orchestrator | 2026-01-05 02:20:08.432280 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA 
certificates] ******** 2026-01-05 02:20:08.432286 | orchestrator | Monday 05 January 2026 02:20:06 +0000 (0:00:00.566) 0:00:04.044 ******** 2026-01-05 02:20:08.432308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:08.432314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:08.432326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:08.432332 | orchestrator | 2026-01-05 02:20:08.432339 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-05 02:20:08.432344 | orchestrator | Monday 05 January 2026 02:20:07 +0000 (0:00:01.329) 0:00:05.373 ******** 2026-01-05 02:20:08.432354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 02:20:08.432360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 02:20:08.432366 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:20:08.432372 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:20:08.432385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 02:20:15.539355 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:20:15.539446 | orchestrator | 2026-01-05 02:20:15.539458 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-05 02:20:15.539467 | orchestrator | Monday 05 January 2026 02:20:08 +0000 (0:00:00.606) 0:00:05.980 ******** 2026-01-05 02:20:15.539476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 
02:20:15.539501 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:20:15.539508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 02:20:15.539514 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:20:15.539534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 02:20:15.539541 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:20:15.539545 | orchestrator | 2026-01-05 02:20:15.539549 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-05 02:20:15.539553 | orchestrator | Monday 05 January 2026 02:20:09 +0000 (0:00:00.657) 0:00:06.637 ******** 2026-01-05 02:20:15.539557 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:15.539562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:15.539582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:15.539593 | orchestrator | 2026-01-05 02:20:15.539596 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-05 02:20:15.539600 | orchestrator | Monday 05 January 2026 02:20:10 +0000 (0:00:01.341) 0:00:07.978 ******** 2026-01-05 02:20:15.539604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:15.539608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:15.539616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:20:15.539620 | orchestrator | 2026-01-05 02:20:15.539624 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-05 02:20:15.539628 | orchestrator | Monday 05 January 2026 02:20:12 +0000 (0:00:01.659) 0:00:09.638 ******** 2026-01-05 02:20:15.539631 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:20:15.539635 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:20:15.539639 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:20:15.539643 | orchestrator | 2026-01-05 02:20:15.539647 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-05 02:20:15.539650 | orchestrator | Monday 05 January 2026 02:20:12 +0000 (0:00:00.314) 0:00:09.952 ******** 2026-01-05 02:20:15.539654 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-05 02:20:15.539660 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-05 02:20:15.539664 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-05 02:20:15.539667 | orchestrator | 2026-01-05 02:20:15.539671 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-05 02:20:15.539675 | orchestrator | Monday 05 January 2026 02:20:13 +0000 (0:00:01.327) 0:00:11.279 
******** 2026-01-05 02:20:15.539681 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-05 02:20:15.539688 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-05 02:20:15.539699 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-05 02:20:15.539705 | orchestrator | 2026-01-05 02:20:15.539712 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-01-05 02:20:15.539722 | orchestrator | Monday 05 January 2026 02:20:15 +0000 (0:00:01.805) 0:00:13.085 ******** 2026-01-05 02:20:22.220456 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 02:20:22.220542 | orchestrator | 2026-01-05 02:20:22.220552 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-01-05 02:20:22.220559 | orchestrator | Monday 05 January 2026 02:20:16 +0000 (0:00:00.758) 0:00:13.843 ******** 2026-01-05 02:20:22.220565 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-01-05 02:20:22.220571 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-05 02:20:22.220577 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:20:22.220584 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:20:22.220589 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:20:22.220594 | orchestrator | 2026-01-05 02:20:22.220600 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-01-05 02:20:22.220605 | orchestrator | Monday 05 January 2026 02:20:17 +0000 (0:00:00.778) 0:00:14.621 ******** 2026-01-05 02:20:22.220611 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:20:22.220616 | orchestrator | skipping: [testbed-node-1] 2026-01-05 
02:20:22.220621 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:20:22.220626 | orchestrator | 2026-01-05 02:20:22.220631 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-05 02:20:22.220637 | orchestrator | Monday 05 January 2026 02:20:17 +0000 (0:00:00.342) 0:00:14.963 ******** 2026-01-05 02:20:22.220645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327525, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9945195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327525, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9945195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327525, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9945195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1328390, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2045777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1328390, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2045777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1328390, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2045777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327544, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9963837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327544, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9963837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327544, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9963837, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1328391, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2083857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1328391, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2083857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:22.220767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1328391, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2083857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327561, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0073838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327561, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0073838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196682 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327561, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0073838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1327592, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0146859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1327592, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0146859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196736 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1327592, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0146859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327523, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9925964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327523, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9925964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196784 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327523, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9925964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327538, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9949472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327538, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9949472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196820 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327538, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9949472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:26.196835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327545, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9973838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327545, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9973838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 
02:20:30.363451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327545, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9973838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327574, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.011191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327574, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.011191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-01-05 02:20:30.363537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327574, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.011191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1328338, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2045777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1328338, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2045777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-01-05 02:20:30.363602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1328338, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2045777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327540, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9962566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327540, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9962566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327540, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9962566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1327589, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.012665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:30.363722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1327589, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.012665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1327589, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.012665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327566, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0103097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327566, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0103097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327566, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0103097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1327550, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.006384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1327550, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.006384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1327550, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.006384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1327548, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9995954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1327548, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9995954, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1327548, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.9995954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1327582, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0121279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1327582, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1767572306.0121279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:34.569604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1327582, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.0121279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1327546, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.998422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1327546, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1767572305.998422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1327546, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572305.998422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1327602, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.1616127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 
1327602, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.1616127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1327602, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.1616127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1328601, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2614932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1328601, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2614932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1328601, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2614932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1328435, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2265031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1328435, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2265031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1328435, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2265031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:38.848461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1328419, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2123857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1328419, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2123857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1328419, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2123857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1328463, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2295878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1328463, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2295878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1328463, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2295878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1328404, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2097535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843265 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1328404, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2097535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1328404, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2097535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1328515, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2443092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1328515, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2443092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1328515, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2443092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1328465, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2396185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:42.843354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1328465, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2396185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1328465, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2396185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1328523, 'dev': 135, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.244851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1328523, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.244851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1328523, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.244851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1328595, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.259906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1328595, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.259906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1328595, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.259906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1328506, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2428596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1328506, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2428596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1328506, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2428596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1328452, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2282493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1328452, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2282493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:47.435992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1328452, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2282493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1328429, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2187586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1328429, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2187586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1328429, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2187586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482375 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1328446, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2272005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1328446, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2272005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1328446, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2272005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482440 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1328422, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2167869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1328422, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2167869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1328422, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2167869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1328455, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2284577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1328455, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2284577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1328455, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2284577, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:51.482500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1328559, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2583861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1328559, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2583861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1328559, 'dev': 135, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2583861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1328546, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2508183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1328546, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2508183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1328546, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2508183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1328408, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2109354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1328408, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2109354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1328408, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2109354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1328414, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2121875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1328414, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2121875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1328414, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2121875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1328502, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.241386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:20:55.306453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1328502, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.241386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 
02:22:33.945998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1328502, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.241386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:22:33.946134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1328528, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2453861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:22:33.946161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1328528, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2453861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:22:33.946166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1328528, 'dev': 135, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1767572306.2453861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-05 02:22:33.946171 | orchestrator | 2026-01-05 02:22:33.946176 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-05 02:22:33.946182 | orchestrator | Monday 05 January 2026 02:20:56 +0000 (0:00:39.148) 0:00:54.112 ******** 2026-01-05 02:22:33.946186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:22:33.946213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:22:33.946218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 02:22:33.946222 | orchestrator | 2026-01-05 02:22:33.946230 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-05 02:22:33.946234 | orchestrator | Monday 05 January 2026 02:20:57 +0000 (0:00:01.079) 0:00:55.192 ******** 2026-01-05 02:22:33.946238 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:22:33.946243 | orchestrator | 2026-01-05 02:22:33.946247 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-05 02:22:33.946250 | orchestrator | Monday 05 January 2026 02:21:00 +0000 (0:00:02.529) 0:00:57.722 ******** 2026-01-05 02:22:33.946254 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:22:33.946258 | 
orchestrator |
2026-01-05 02:22:33.946261 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-05 02:22:33.946265 | orchestrator | Monday 05 January 2026 02:21:02 +0000 (0:00:00.073) 0:01:00.186 ********
2026-01-05 02:22:33.946269 | orchestrator |
2026-01-05 02:22:33.946273 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-05 02:22:33.946276 | orchestrator | Monday 05 January 2026 02:21:02 +0000 (0:00:00.082) 0:01:00.260 ********
2026-01-05 02:22:33.946280 | orchestrator |
2026-01-05 02:22:33.946284 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-05 02:22:33.946287 | orchestrator | Monday 05 January 2026 02:21:02 +0000 (0:00:00.072) 0:01:00.342 ********
2026-01-05 02:22:33.946291 | orchestrator |
2026-01-05 02:22:33.946295 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-01-05 02:22:33.946299 | orchestrator | Monday 05 January 2026 02:21:02 +0000 (0:00:00.072) 0:01:00.415 ********
2026-01-05 02:22:33.946302 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:22:33.946306 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:22:33.946310 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:22:33.946314 | orchestrator |
2026-01-05 02:22:33.946317 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-01-05 02:22:33.946321 | orchestrator | Monday 05 January 2026 02:21:05 +0000 (0:00:02.294) 0:01:02.710 ********
2026-01-05 02:22:33.946325 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:22:33.946329 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:22:33.946332 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-01-05 02:22:33.946338 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-01-05 02:22:33.946342 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-01-05 02:22:33.946346 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-01-05 02:22:33.946350 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:22:33.946354 | orchestrator |
2026-01-05 02:22:33.946358 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-05 02:22:33.946362 | orchestrator | Monday 05 January 2026 02:21:56 +0000 (0:00:51.701) 0:01:54.411 ********
2026-01-05 02:22:33.946366 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:22:33.946369 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:22:33.946373 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:22:33.946377 | orchestrator |
2026-01-05 02:22:33.946381 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-05 02:22:33.946384 | orchestrator | Monday 05 January 2026 02:22:28 +0000 (0:00:31.501) 0:02:25.912 ********
2026-01-05 02:22:33.946388 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:22:33.946392 | orchestrator |
2026-01-05 02:22:33.946396 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-05 02:22:33.946399 | orchestrator | Monday 05 January 2026 02:22:30 +0000 (0:00:02.379) 0:02:28.292 ********
2026-01-05 02:22:33.946403 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:22:33.946407 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:22:33.946411 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:22:33.946414 | orchestrator |
2026-01-05 02:22:33.946418 | orchestrator | TASK [grafana : Enable grafana datasources]
************************************
2026-01-05 02:22:33.946429 | orchestrator | Monday 05 January 2026 02:22:31 +0000 (0:00:00.355) 0:02:28.648 ********
2026-01-05 02:22:33.946434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-01-05 02:22:33.946443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-05 02:22:34.713349 | orchestrator |
2026-01-05 02:22:34.713428 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-05 02:22:34.713435 | orchestrator | Monday 05 January 2026 02:22:33 +0000 (0:00:02.841) 0:02:31.490 ********
2026-01-05 02:22:34.713440 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:22:34.713445 | orchestrator |
2026-01-05 02:22:34.713450 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:22:34.713456 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 02:22:34.713462 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 02:22:34.713466 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-05 02:22:34.713470 | orchestrator |
2026-01-05 02:22:34.713474 | orchestrator |
2026-01-05 02:22:34.713478 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:22:34.713481 | orchestrator | Monday 05 January 2026 02:22:34 +0000 (0:00:00.386) 0:02:31.876 ********
2026-01-05 02:22:34.713485 | orchestrator | ===============================================================================
2026-01-05 02:22:34.713489 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.70s
2026-01-05 02:22:34.713493 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.15s
2026-01-05 02:22:34.713497 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.50s
2026-01-05 02:22:34.713500 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.84s
2026-01-05 02:22:34.713504 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.53s
2026-01-05 02:22:34.713508 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.46s
2026-01-05 02:22:34.713512 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.38s
2026-01-05 02:22:34.713515 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.29s
2026-01-05 02:22:34.713519 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.81s
2026-01-05 02:22:34.713523 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.66s
2026-01-05 02:22:34.713527 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.34s
2026-01-05 02:22:34.713530 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.33s
2026-01-05 02:22:34.713535 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.33s
2026-01-05 02:22:34.713539 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.08s
2026-01-05 02:22:34.713542 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.94s
2026-01-05 02:22:34.713546 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.86s
2026-01-05 02:22:34.713550 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.78s
2026-01-05 02:22:34.713572 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.76s
2026-01-05 02:22:34.713576 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.66s
2026-01-05 02:22:34.713580 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.61s
2026-01-05 02:22:35.034315 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-01-05 02:22:35.042377 | orchestrator | + set -e
2026-01-05 02:22:35.042472 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-05 02:22:35.042486 | orchestrator | ++ export INTERACTIVE=false
2026-01-05 02:22:35.042497 | orchestrator | ++ INTERACTIVE=false
2026-01-05 02:22:35.042506 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-05 02:22:35.042515 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-05 02:22:35.042525 | orchestrator | + source /opt/manager-vars.sh
2026-01-05 02:22:35.042534 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-05 02:22:35.042543 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-05 02:22:35.042553 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-05 02:22:35.042563 | orchestrator | ++ CEPH_VERSION=reef
2026-01-05 02:22:35.042573 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-05 02:22:35.042582 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-05 02:22:35.042592 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-05 02:22:35.042602 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-05 02:22:35.042608 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-05 02:22:35.042614 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-05 02:22:35.042620 | orchestrator | ++ export ARA=false
2026-01-05 02:22:35.042625 | orchestrator | ++ ARA=false
2026-01-05 02:22:35.042631 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-05 02:22:35.042637 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-05 02:22:35.042643 | orchestrator | ++ export TEMPEST=false
2026-01-05 02:22:35.042648 | orchestrator | ++ TEMPEST=false
2026-01-05 02:22:35.042653 | orchestrator | ++ export IS_ZUUL=true
2026-01-05 02:22:35.042659 | orchestrator | ++ IS_ZUUL=true
2026-01-05 02:22:35.042664 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95
2026-01-05 02:22:35.042684 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95
2026-01-05 02:22:35.042690 | orchestrator | ++ export EXTERNAL_API=false
2026-01-05 02:22:35.042695 | orchestrator | ++ EXTERNAL_API=false
2026-01-05 02:22:35.042700 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-05 02:22:35.042706 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-05 02:22:35.042711 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-05 02:22:35.042717 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-05 02:22:35.042722 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-05 02:22:35.042727 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-05 02:22:35.043437 | orchestrator | ++ semver 9.5.0 8.0.0
2026-01-05 02:22:35.096836 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-05 02:22:35.096989 | orchestrator | + osism apply clusterapi
2026-01-05 02:22:37.200085 | orchestrator | 2026-01-05 02:22:37 | INFO  | Task 381cac65-e7b3-49b7-8e62-34c32d657327 (clusterapi) was prepared for execution.
2026-01-05 02:22:37.200184 | orchestrator | 2026-01-05 02:22:37 | INFO  | It takes a moment until task 381cac65-e7b3-49b7-8e62-34c32d657327 (clusterapi) has been started and output is visible here.
2026-01-05 02:23:36.205645 | orchestrator |
2026-01-05 02:23:36.205748 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-01-05 02:23:36.205762 | orchestrator |
2026-01-05 02:23:36.205770 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-01-05 02:23:36.205778 | orchestrator | Monday 05 January 2026 02:22:41 +0000 (0:00:00.199) 0:00:00.199 ********
2026-01-05 02:23:36.205783 | orchestrator | included: cert_manager for testbed-manager
2026-01-05 02:23:36.205787 | orchestrator |
2026-01-05 02:23:36.205791 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-01-05 02:23:36.205796 | orchestrator | Monday 05 January 2026 02:22:41 +0000 (0:00:00.241) 0:00:00.440 ********
2026-01-05 02:23:36.205800 | orchestrator | changed: [testbed-manager]
2026-01-05 02:23:36.205805 | orchestrator |
2026-01-05 02:23:36.205809 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-01-05 02:23:36.205812 | orchestrator | Monday 05 January 2026 02:22:47 +0000 (0:00:05.472) 0:00:05.912 ********
2026-01-05 02:23:36.205816 | orchestrator | changed: [testbed-manager]
2026-01-05 02:23:36.205840 | orchestrator |
2026-01-05 02:23:36.205844 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-01-05 02:23:36.205848 | orchestrator |
2026-01-05 02:23:36.205852 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-01-05 02:23:36.205856 | orchestrator | Monday 05 January 2026 02:23:15 +0000 (0:00:28.194) 0:00:34.107 ********
2026-01-05 02:23:36.205860 | orchestrator | ok: [testbed-manager]
2026-01-05 02:23:36.205864 | orchestrator |
2026-01-05 02:23:36.205868 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-01-05 02:23:36.205872 | orchestrator | Monday 05 January 2026 02:23:16 +0000 (0:00:01.143) 0:00:35.251 ********
2026-01-05 02:23:36.205876 | orchestrator | ok: [testbed-manager]
2026-01-05 02:23:36.205880 | orchestrator |
2026-01-05 02:23:36.205926 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-01-05 02:23:36.205932 | orchestrator | Monday 05 January 2026 02:23:16 +0000 (0:00:00.144) 0:00:35.396 ********
2026-01-05 02:23:36.205936 | orchestrator | ok: [testbed-manager]
2026-01-05 02:23:36.205940 | orchestrator |
2026-01-05 02:23:36.205944 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-01-05 02:23:36.205948 | orchestrator | Monday 05 January 2026 02:23:33 +0000 (0:00:16.385) 0:00:51.781 ********
2026-01-05 02:23:36.205952 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:23:36.205956 | orchestrator |
2026-01-05 02:23:36.205960 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-01-05 02:23:36.205964 | orchestrator | Monday 05 January 2026 02:23:33 +0000 (0:00:00.177) 0:00:51.959 ********
2026-01-05 02:23:36.205968 | orchestrator | changed: [testbed-manager]
2026-01-05 02:23:36.205972 | orchestrator |
2026-01-05 02:23:36.205976 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:23:36.205982 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-05 02:23:36.205987 | orchestrator |
2026-01-05 02:23:36.205990 | orchestrator |
2026-01-05 02:23:36.205994 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:23:36.205998 | orchestrator | Monday 05 January 2026 02:23:35 +0000 (0:00:02.442) 0:00:54.401 ********
2026-01-05 02:23:36.206002 | orchestrator | ===============================================================================
2026-01-05 02:23:36.206006 | orchestrator |
cert_manager : Deploy cert-manager ------------------------------------- 28.19s
2026-01-05 02:23:36.206009 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.39s
2026-01-05 02:23:36.206052 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.47s
2026-01-05 02:23:36.206060 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.44s
2026-01-05 02:23:36.206066 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.14s
2026-01-05 02:23:36.206073 | orchestrator | Include cert_manager role ----------------------------------------------- 0.24s
2026-01-05 02:23:36.206079 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.18s
2026-01-05 02:23:36.206084 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.14s
2026-01-05 02:23:36.518552 | orchestrator | + osism apply magnum
2026-01-05 02:23:38.667765 | orchestrator | 2026-01-05 02:23:38 | INFO  | Task 6f07896d-c153-40a1-8d2a-5ba1f6f19fd3 (magnum) was prepared for execution.
2026-01-05 02:23:38.667843 | orchestrator | 2026-01-05 02:23:38 | INFO  | It takes a moment until task 6f07896d-c153-40a1-8d2a-5ba1f6f19fd3 (magnum) has been started and output is visible here.
2026-01-05 02:24:25.027952 | orchestrator |
2026-01-05 02:24:25.028062 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 02:24:25.028076 | orchestrator |
2026-01-05 02:24:25.028083 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 02:24:25.028104 | orchestrator | Monday 05 January 2026 02:23:42 +0000 (0:00:00.277) 0:00:00.277 ********
2026-01-05 02:24:25.028133 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:24:25.028141 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:24:25.028147 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:24:25.028152 | orchestrator |
2026-01-05 02:24:25.028157 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 02:24:25.028163 | orchestrator | Monday 05 January 2026 02:23:43 +0000 (0:00:00.325) 0:00:00.603 ********
2026-01-05 02:24:25.028168 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-05 02:24:25.028175 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-05 02:24:25.028181 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-05 02:24:25.028187 | orchestrator |
2026-01-05 02:24:25.028192 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-05 02:24:25.028198 | orchestrator |
2026-01-05 02:24:25.028204 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-05 02:24:25.028210 | orchestrator | Monday 05 January 2026 02:23:43 +0000 (0:00:00.543) 0:00:01.147 ********
2026-01-05 02:24:25.028216 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:24:25.028224 | orchestrator |
2026-01-05 02:24:25.028230 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-05 02:24:25.028236 | orchestrator | Monday 05 January 2026 02:23:44 +0000 (0:00:00.582) 0:00:01.730 ********
2026-01-05 02:24:25.028243 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-01-05 02:24:25.028249 | orchestrator |
2026-01-05 02:24:25.028254 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-01-05 02:24:25.028261 | orchestrator | Monday 05 January 2026 02:23:48 +0000 (0:00:03.961) 0:00:05.691 ********
2026-01-05 02:24:25.028266 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-01-05 02:24:25.028273 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-01-05 02:24:25.028279 | orchestrator |
2026-01-05 02:24:25.028285 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-01-05 02:24:25.028291 | orchestrator | Monday 05 January 2026 02:23:55 +0000 (0:00:07.386) 0:00:13.077 ********
2026-01-05 02:24:25.028298 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-05 02:24:25.028305 | orchestrator |
2026-01-05 02:24:25.028311 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-01-05 02:24:25.028317 | orchestrator | Monday 05 January 2026 02:23:59 +0000 (0:00:03.723) 0:00:16.800 ********
2026-01-05 02:24:25.028323 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-05 02:24:25.028330 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-01-05 02:24:25.028336 | orchestrator |
2026-01-05 02:24:25.028340 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-01-05 02:24:25.028344 | orchestrator | Monday 05 January 2026 02:24:03 +0000 (0:00:04.207) 0:00:21.007 ********
2026-01-05 02:24:25.028348 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-05 02:24:25.028352 | orchestrator | 2026-01-05 02:24:25.028356 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-01-05 02:24:25.028359 | orchestrator | Monday 05 January 2026 02:24:07 +0000 (0:00:03.546) 0:00:24.554 ******** 2026-01-05 02:24:25.028363 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-01-05 02:24:25.028367 | orchestrator | 2026-01-05 02:24:25.028371 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-01-05 02:24:25.028374 | orchestrator | Monday 05 January 2026 02:24:11 +0000 (0:00:04.219) 0:00:28.773 ******** 2026-01-05 02:24:25.028378 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:24:25.028382 | orchestrator | 2026-01-05 02:24:25.028386 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-01-05 02:24:25.028390 | orchestrator | Monday 05 January 2026 02:24:15 +0000 (0:00:03.916) 0:00:32.690 ******** 2026-01-05 02:24:25.028400 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:24:25.028405 | orchestrator | 2026-01-05 02:24:25.028411 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-01-05 02:24:25.028417 | orchestrator | Monday 05 January 2026 02:24:19 +0000 (0:00:04.229) 0:00:36.919 ******** 2026-01-05 02:24:25.028423 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:24:25.028429 | orchestrator | 2026-01-05 02:24:25.028435 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-01-05 02:24:25.028441 | orchestrator | Monday 05 January 2026 02:24:23 +0000 (0:00:03.766) 0:00:40.685 ******** 2026-01-05 02:24:25.028469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:25.028485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:25.028493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:25.028502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:25.028517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:25.028530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:32.828989 | orchestrator | 2026-01-05 02:24:32.829090 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-01-05 02:24:32.829099 | orchestrator | Monday 05 January 2026 02:24:25 +0000 (0:00:01.705) 0:00:42.390 ******** 2026-01-05 02:24:32.829104 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:24:32.829111 | orchestrator | 2026-01-05 02:24:32.829116 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-01-05 02:24:32.829121 | orchestrator | Monday 05 January 2026 02:24:25 +0000 (0:00:00.139) 0:00:42.530 ******** 2026-01-05 02:24:32.829126 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:24:32.829131 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:24:32.829136 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:24:32.829141 | orchestrator | 2026-01-05 02:24:32.829146 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-01-05 02:24:32.829151 | orchestrator | Monday 05 January 2026 02:24:25 +0000 (0:00:00.335) 0:00:42.866 ******** 2026-01-05 02:24:32.829156 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 02:24:32.829161 | orchestrator | 2026-01-05 02:24:32.829173 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-01-05 02:24:32.829178 | orchestrator | Monday 05 January 2026 02:24:26 +0000 (0:00:00.849) 0:00:43.715 ******** 2026-01-05 02:24:32.829185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:32.829194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:32.829216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:32.829238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:32.829245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:32.829250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:32.829259 | orchestrator | 2026-01-05 02:24:32.829264 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-01-05 02:24:32.829269 
| orchestrator | Monday 05 January 2026 02:24:28 +0000 (0:00:02.574) 0:00:46.290 ******** 2026-01-05 02:24:32.829274 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:24:32.829281 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:24:32.829286 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:24:32.829290 | orchestrator | 2026-01-05 02:24:32.829295 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-05 02:24:32.829300 | orchestrator | Monday 05 January 2026 02:24:29 +0000 (0:00:00.567) 0:00:46.857 ******** 2026-01-05 02:24:32.829305 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:24:32.829311 | orchestrator | 2026-01-05 02:24:32.829315 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-01-05 02:24:32.829320 | orchestrator | Monday 05 January 2026 02:24:30 +0000 (0:00:00.607) 0:00:47.465 ******** 2026-01-05 02:24:32.829325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:32.829338 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:33.728720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:33.728815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:33.728854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:33.728864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:33.728906 | orchestrator | 2026-01-05 02:24:33.728921 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-01-05 02:24:33.728936 | orchestrator | Monday 05 January 2026 02:24:32 +0000 (0:00:02.736) 0:00:50.201 ******** 2026-01-05 02:24:33.728988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 02:24:33.728999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 02:24:33.729016 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:24:33.729026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 02:24:33.729035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 02:24:33.729043 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:24:33.729052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 02:24:33.729071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 02:24:37.442781 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:24:37.442904 | orchestrator | 2026-01-05 
02:24:37.442916 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-01-05 02:24:37.442924 | orchestrator | Monday 05 January 2026 02:24:33 +0000 (0:00:00.892) 0:00:51.094 ******** 2026-01-05 02:24:37.442933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 02:24:37.442960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 02:24:37.442968 | 
orchestrator | skipping: [testbed-node-0] 2026-01-05 02:24:37.442974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 02:24:37.442981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 02:24:37.442987 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:24:37.443019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 02:24:37.443031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 02:24:37.443037 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:24:37.443043 | orchestrator | 2026-01-05 02:24:37.443050 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-01-05 02:24:37.443055 | orchestrator | Monday 05 January 2026 02:24:34 +0000 (0:00:00.904) 0:00:51.999 ******** 2026-01-05 02:24:37.443063 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:37.443070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:37.443086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:43.960136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:43.960289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:43.960311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:43.960327 | orchestrator | 2026-01-05 02:24:43.960338 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-05 02:24:43.960350 | orchestrator | Monday 05 January 2026 02:24:37 +0000 (0:00:02.814) 0:00:54.813 ******** 2026-01-05 02:24:43.960362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:43.960412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:43.960453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:43.960466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:43.960475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:43.960485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:43.960495 | orchestrator | 2026-01-05 02:24:43.960505 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-05 02:24:43.960515 | orchestrator | Monday 05 January 2026 02:24:43 +0000 (0:00:05.805) 0:01:00.618 ******** 2026-01-05 02:24:43.960548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 02:24:46.079843 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 02:24:46.080121 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:24:46.080144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 02:24:46.080157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 02:24:46.080167 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:24:46.080194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 02:24:46.080251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 02:24:46.080267 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:24:46.080282 | orchestrator | 2026-01-05 02:24:46.080299 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-05 02:24:46.080314 | orchestrator | Monday 05 January 2026 02:24:43 +0000 (0:00:00.715) 0:01:01.334 ******** 2026-01-05 02:24:46.080327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:46.080339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:46.080350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 02:24:46.080376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:24:46.080396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-05 02:25:44.574572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-01-05 02:25:44.574643 | orchestrator | 2026-01-05 02:25:44.574651 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-05 02:25:44.574657 | orchestrator | Monday 05 January 2026 02:24:46 +0000 (0:00:02.112) 0:01:03.447 ******** 2026-01-05 02:25:44.574662 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:25:44.574667 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:25:44.574671 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:25:44.574675 | orchestrator | 2026-01-05 02:25:44.574679 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-05 02:25:44.574683 | orchestrator | Monday 05 January 2026 02:24:46 +0000 (0:00:00.550) 0:01:03.997 ******** 2026-01-05 02:25:44.574687 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:25:44.574691 | orchestrator | 2026-01-05 02:25:44.574695 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-05 02:25:44.574698 | orchestrator | Monday 05 January 2026 02:24:49 +0000 (0:00:02.439) 0:01:06.436 ******** 2026-01-05 02:25:44.574702 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:25:44.574706 | orchestrator | 2026-01-05 02:25:44.574710 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-01-05 02:25:44.574714 | orchestrator | Monday 05 January 2026 02:24:51 +0000 (0:00:02.644) 0:01:09.081 ******** 2026-01-05 02:25:44.574717 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:25:44.574721 | orchestrator | 2026-01-05 02:25:44.574739 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-05 02:25:44.574743 | orchestrator | Monday 05 January 2026 02:25:09 +0000 (0:00:17.711) 0:01:26.792 ******** 2026-01-05 02:25:44.574747 | orchestrator | 2026-01-05 02:25:44.574751 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-01-05 02:25:44.574754 | orchestrator | Monday 05 January 2026 02:25:09 +0000 (0:00:00.082) 0:01:26.875 ******** 2026-01-05 02:25:44.574758 | orchestrator | 2026-01-05 02:25:44.574762 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-05 02:25:44.574766 | orchestrator | Monday 05 January 2026 02:25:09 +0000 (0:00:00.085) 0:01:26.960 ******** 2026-01-05 02:25:44.574770 | orchestrator | 2026-01-05 02:25:44.574774 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-05 02:25:44.574777 | orchestrator | Monday 05 January 2026 02:25:09 +0000 (0:00:00.080) 0:01:27.041 ******** 2026-01-05 02:25:44.574781 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:25:44.574785 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:25:44.574789 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:25:44.574792 | orchestrator | 2026-01-05 02:25:44.574796 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-05 02:25:44.574800 | orchestrator | Monday 05 January 2026 02:25:28 +0000 (0:00:18.951) 0:01:45.992 ******** 2026-01-05 02:25:44.574804 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:25:44.574807 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:25:44.574811 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:25:44.574815 | orchestrator | 2026-01-05 02:25:44.574819 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:25:44.574824 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 02:25:44.574837 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 02:25:44.574841 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-01-05 02:25:44.574868 | orchestrator | 2026-01-05 02:25:44.574875 | orchestrator | 2026-01-05 02:25:44.574882 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:25:44.574888 | orchestrator | Monday 05 January 2026 02:25:44 +0000 (0:00:15.588) 0:02:01.581 ******** 2026-01-05 02:25:44.574894 | orchestrator | =============================================================================== 2026-01-05 02:25:44.574900 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.95s 2026-01-05 02:25:44.574906 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.71s 2026-01-05 02:25:44.574912 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.59s 2026-01-05 02:25:44.574918 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.39s 2026-01-05 02:25:44.574924 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.81s 2026-01-05 02:25:44.574930 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.23s 2026-01-05 02:25:44.574933 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.22s 2026-01-05 02:25:44.574950 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.21s 2026-01-05 02:25:44.574954 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.96s 2026-01-05 02:25:44.574958 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.92s 2026-01-05 02:25:44.574963 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.77s 2026-01-05 02:25:44.574970 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.72s 2026-01-05 02:25:44.574976 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.55s 2026-01-05 02:25:44.574989 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.81s 2026-01-05 02:25:44.574995 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.74s 2026-01-05 02:25:44.575000 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.64s 2026-01-05 02:25:44.575006 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.57s 2026-01-05 02:25:44.575011 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.44s 2026-01-05 02:25:44.575017 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.11s 2026-01-05 02:25:44.575023 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.71s 2026-01-05 02:25:45.334928 | orchestrator | ok: Runtime: 1:43:47.367691 2026-01-05 02:25:45.579447 | 2026-01-05 02:25:45.579595 | TASK [Deploy in a nutshell] 2026-01-05 02:25:46.118066 | orchestrator | skipping: Conditional result was False 2026-01-05 02:25:46.144937 | 2026-01-05 02:25:46.145085 | TASK [Bootstrap services] 2026-01-05 02:25:46.880547 | orchestrator | 2026-01-05 02:25:46.880705 | orchestrator | # BOOTSTRAP 2026-01-05 02:25:46.880716 | orchestrator | 2026-01-05 02:25:46.880722 | orchestrator | + set -e 2026-01-05 02:25:46.880727 | orchestrator | + echo 2026-01-05 02:25:46.880733 | orchestrator | + echo '# BOOTSTRAP' 2026-01-05 02:25:46.880741 | orchestrator | + echo 2026-01-05 02:25:46.880767 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-01-05 02:25:46.888541 | orchestrator | + set -e 2026-01-05 02:25:46.888961 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-01-05 02:25:49.080915 | orchestrator | 2026-01-05 02:25:49 | INFO  | It takes a 
moment until task ca4d64a7-ba6b-4af5-9543-688e8291af4e (flavor-manager) has been started and output is visible here. 2026-01-05 02:25:57.269567 | orchestrator | 2026-01-05 02:25:52 | INFO  | Flavor SCS-1L-1 created 2026-01-05 02:25:57.269688 | orchestrator | 2026-01-05 02:25:52 | INFO  | Flavor SCS-1L-1-5 created 2026-01-05 02:25:57.269699 | orchestrator | 2026-01-05 02:25:53 | INFO  | Flavor SCS-1V-2 created 2026-01-05 02:25:57.269704 | orchestrator | 2026-01-05 02:25:53 | INFO  | Flavor SCS-1V-2-5 created 2026-01-05 02:25:57.269710 | orchestrator | 2026-01-05 02:25:53 | INFO  | Flavor SCS-1V-4 created 2026-01-05 02:25:57.269715 | orchestrator | 2026-01-05 02:25:53 | INFO  | Flavor SCS-1V-4-10 created 2026-01-05 02:25:57.269720 | orchestrator | 2026-01-05 02:25:53 | INFO  | Flavor SCS-1V-8 created 2026-01-05 02:25:57.269725 | orchestrator | 2026-01-05 02:25:53 | INFO  | Flavor SCS-1V-8-20 created 2026-01-05 02:25:57.269736 | orchestrator | 2026-01-05 02:25:54 | INFO  | Flavor SCS-2V-4 created 2026-01-05 02:25:57.269740 | orchestrator | 2026-01-05 02:25:54 | INFO  | Flavor SCS-2V-4-10 created 2026-01-05 02:25:57.269745 | orchestrator | 2026-01-05 02:25:54 | INFO  | Flavor SCS-2V-8 created 2026-01-05 02:25:57.269749 | orchestrator | 2026-01-05 02:25:54 | INFO  | Flavor SCS-2V-8-20 created 2026-01-05 02:25:57.269753 | orchestrator | 2026-01-05 02:25:54 | INFO  | Flavor SCS-2V-16 created 2026-01-05 02:25:57.269758 | orchestrator | 2026-01-05 02:25:54 | INFO  | Flavor SCS-2V-16-50 created 2026-01-05 02:25:57.269764 | orchestrator | 2026-01-05 02:25:54 | INFO  | Flavor SCS-4V-8 created 2026-01-05 02:25:57.269771 | orchestrator | 2026-01-05 02:25:55 | INFO  | Flavor SCS-4V-8-20 created 2026-01-05 02:25:57.269777 | orchestrator | 2026-01-05 02:25:55 | INFO  | Flavor SCS-4V-16 created 2026-01-05 02:25:57.269783 | orchestrator | 2026-01-05 02:25:55 | INFO  | Flavor SCS-4V-16-50 created 2026-01-05 02:25:57.269791 | orchestrator | 2026-01-05 02:25:55 | INFO  | Flavor 
SCS-4V-32 created 2026-01-05 02:25:57.269796 | orchestrator | 2026-01-05 02:25:55 | INFO  | Flavor SCS-4V-32-100 created 2026-01-05 02:25:57.269810 | orchestrator | 2026-01-05 02:25:55 | INFO  | Flavor SCS-8V-16 created 2026-01-05 02:25:57.269815 | orchestrator | 2026-01-05 02:25:55 | INFO  | Flavor SCS-8V-16-50 created 2026-01-05 02:25:57.269825 | orchestrator | 2026-01-05 02:25:56 | INFO  | Flavor SCS-8V-32 created 2026-01-05 02:25:57.269830 | orchestrator | 2026-01-05 02:25:56 | INFO  | Flavor SCS-8V-32-100 created 2026-01-05 02:25:57.269834 | orchestrator | 2026-01-05 02:25:56 | INFO  | Flavor SCS-16V-32 created 2026-01-05 02:25:57.269839 | orchestrator | 2026-01-05 02:25:56 | INFO  | Flavor SCS-16V-32-100 created 2026-01-05 02:25:57.269843 | orchestrator | 2026-01-05 02:25:56 | INFO  | Flavor SCS-2V-4-20s created 2026-01-05 02:25:57.269848 | orchestrator | 2026-01-05 02:25:56 | INFO  | Flavor SCS-4V-8-50s created 2026-01-05 02:25:57.269852 | orchestrator | 2026-01-05 02:25:56 | INFO  | Flavor SCS-8V-32-100s created 2026-01-05 02:25:59.617315 | orchestrator | 2026-01-05 02:25:59 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-01-05 02:26:09.880799 | orchestrator | 2026-01-05 02:26:09 | INFO  | Task 9252aa4d-38e4-4dbf-8076-8963bddeb5eb (bootstrap-basic) was prepared for execution. 2026-01-05 02:26:09.881013 | orchestrator | 2026-01-05 02:26:09 | INFO  | It takes a moment until task 9252aa4d-38e4-4dbf-8076-8963bddeb5eb (bootstrap-basic) has been started and output is visible here. 
2026-01-05 02:26:55.001802 | orchestrator | 2026-01-05 02:26:55.001906 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-01-05 02:26:55.001920 | orchestrator | 2026-01-05 02:26:55.001930 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-05 02:26:55.001939 | orchestrator | Monday 05 January 2026 02:26:14 +0000 (0:00:00.095) 0:00:00.096 ******** 2026-01-05 02:26:55.001949 | orchestrator | ok: [localhost] 2026-01-05 02:26:55.001958 | orchestrator | 2026-01-05 02:26:55.001967 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-01-05 02:26:55.001975 | orchestrator | Monday 05 January 2026 02:26:16 +0000 (0:00:01.988) 0:00:02.084 ******** 2026-01-05 02:26:55.001984 | orchestrator | ok: [localhost] 2026-01-05 02:26:55.001992 | orchestrator | 2026-01-05 02:26:55.002000 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-01-05 02:26:55.002009 | orchestrator | Monday 05 January 2026 02:26:23 +0000 (0:00:07.254) 0:00:09.338 ******** 2026-01-05 02:26:55.002071 | orchestrator | changed: [localhost] 2026-01-05 02:26:55.002081 | orchestrator | 2026-01-05 02:26:55.002089 | orchestrator | TASK [Create public network] *************************************************** 2026-01-05 02:26:55.002098 | orchestrator | Monday 05 January 2026 02:26:30 +0000 (0:00:06.638) 0:00:15.976 ******** 2026-01-05 02:26:55.002106 | orchestrator | changed: [localhost] 2026-01-05 02:26:55.002115 | orchestrator | 2026-01-05 02:26:55.002196 | orchestrator | TASK [Set public network to default] ******************************************* 2026-01-05 02:26:55.002208 | orchestrator | Monday 05 January 2026 02:26:36 +0000 (0:00:05.431) 0:00:21.408 ******** 2026-01-05 02:26:55.002221 | orchestrator | changed: [localhost] 2026-01-05 02:26:55.002230 | orchestrator | 2026-01-05 02:26:55.002239 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-01-05 02:26:55.002247 | orchestrator | Monday 05 January 2026 02:26:42 +0000 (0:00:06.683) 0:00:28.092 ******** 2026-01-05 02:26:55.002256 | orchestrator | changed: [localhost] 2026-01-05 02:26:55.002264 | orchestrator | 2026-01-05 02:26:55.002273 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-01-05 02:26:55.002281 | orchestrator | Monday 05 January 2026 02:26:47 +0000 (0:00:04.477) 0:00:32.569 ******** 2026-01-05 02:26:55.002290 | orchestrator | changed: [localhost] 2026-01-05 02:26:55.002299 | orchestrator | 2026-01-05 02:26:55.002312 | orchestrator | TASK [Create manager role] ***************************************************** 2026-01-05 02:26:55.002334 | orchestrator | Monday 05 January 2026 02:26:51 +0000 (0:00:03.826) 0:00:36.396 ******** 2026-01-05 02:26:55.002345 | orchestrator | ok: [localhost] 2026-01-05 02:26:55.002355 | orchestrator | 2026-01-05 02:26:55.002364 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:26:55.002374 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 02:26:55.002385 | orchestrator | 2026-01-05 02:26:55.002394 | orchestrator | 2026-01-05 02:26:55.002404 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:26:55.002414 | orchestrator | Monday 05 January 2026 02:26:54 +0000 (0:00:03.689) 0:00:40.086 ******** 2026-01-05 02:26:55.002423 | orchestrator | =============================================================================== 2026-01-05 02:26:55.002433 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.25s 2026-01-05 02:26:55.002443 | orchestrator | Set public network to default ------------------------------------------- 6.68s 2026-01-05 02:26:55.002454 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 6.64s 2026-01-05 02:26:55.002463 | orchestrator | Create public network --------------------------------------------------- 5.43s 2026-01-05 02:26:55.002491 | orchestrator | Create public subnet ---------------------------------------------------- 4.48s 2026-01-05 02:26:55.002499 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.83s 2026-01-05 02:26:55.002508 | orchestrator | Create manager role ----------------------------------------------------- 3.69s 2026-01-05 02:26:55.002516 | orchestrator | Gathering Facts --------------------------------------------------------- 1.99s 2026-01-05 02:26:57.505770 | orchestrator | 2026-01-05 02:26:57 | INFO  | It takes a moment until task e5dcb1a1-3c59-45d5-a601-f2214b30efb7 (image-manager) has been started and output is visible here. 2026-01-05 02:27:38.232501 | orchestrator | 2026-01-05 02:27:00 | INFO  | Processing image 'Cirros 0.6.2' 2026-01-05 02:27:38.232584 | orchestrator | 2026-01-05 02:27:00 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-01-05 02:27:38.232592 | orchestrator | 2026-01-05 02:27:00 | INFO  | Importing image Cirros 0.6.2 2026-01-05 02:27:38.232597 | orchestrator | 2026-01-05 02:27:00 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-05 02:27:38.232602 | orchestrator | 2026-01-05 02:27:02 | INFO  | Waiting for image to leave queued state... 2026-01-05 02:27:38.232608 | orchestrator | 2026-01-05 02:27:04 | INFO  | Waiting for import to complete... 
2026-01-05 02:27:38.232612 | orchestrator | 2026-01-05 02:27:14 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-01-05 02:27:38.232616 | orchestrator | 2026-01-05 02:27:15 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-01-05 02:27:38.232620 | orchestrator | 2026-01-05 02:27:15 | INFO  | Setting internal_version = 0.6.2 2026-01-05 02:27:38.232624 | orchestrator | 2026-01-05 02:27:15 | INFO  | Setting image_original_user = cirros 2026-01-05 02:27:38.232629 | orchestrator | 2026-01-05 02:27:15 | INFO  | Adding tag os:cirros 2026-01-05 02:27:38.232633 | orchestrator | 2026-01-05 02:27:15 | INFO  | Setting property architecture: x86_64 2026-01-05 02:27:38.232637 | orchestrator | 2026-01-05 02:27:15 | INFO  | Setting property hw_disk_bus: scsi 2026-01-05 02:27:38.232640 | orchestrator | 2026-01-05 02:27:15 | INFO  | Setting property hw_rng_model: virtio 2026-01-05 02:27:38.232644 | orchestrator | 2026-01-05 02:27:16 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-05 02:27:38.232648 | orchestrator | 2026-01-05 02:27:16 | INFO  | Setting property hw_watchdog_action: reset 2026-01-05 02:27:38.232653 | orchestrator | 2026-01-05 02:27:16 | INFO  | Setting property hypervisor_type: qemu 2026-01-05 02:27:38.232657 | orchestrator | 2026-01-05 02:27:16 | INFO  | Setting property os_distro: cirros 2026-01-05 02:27:38.232660 | orchestrator | 2026-01-05 02:27:16 | INFO  | Setting property os_purpose: minimal 2026-01-05 02:27:38.232664 | orchestrator | 2026-01-05 02:27:17 | INFO  | Setting property replace_frequency: never 2026-01-05 02:27:38.232668 | orchestrator | 2026-01-05 02:27:17 | INFO  | Setting property uuid_validity: none 2026-01-05 02:27:38.232672 | orchestrator | 2026-01-05 02:27:17 | INFO  | Setting property provided_until: none 2026-01-05 02:27:38.232676 | orchestrator | 2026-01-05 02:27:17 | INFO  | Setting property image_description: Cirros 2026-01-05 02:27:38.232679 | orchestrator | 2026-01-05 02:27:17 | INFO  | 
Setting property image_name: Cirros 2026-01-05 02:27:38.232683 | orchestrator | 2026-01-05 02:27:18 | INFO  | Setting property internal_version: 0.6.2 2026-01-05 02:27:38.232687 | orchestrator | 2026-01-05 02:27:18 | INFO  | Setting property image_original_user: cirros 2026-01-05 02:27:38.232707 | orchestrator | 2026-01-05 02:27:18 | INFO  | Setting property os_version: 0.6.2 2026-01-05 02:27:38.232717 | orchestrator | 2026-01-05 02:27:19 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-05 02:27:38.232722 | orchestrator | 2026-01-05 02:27:19 | INFO  | Setting property image_build_date: 2023-05-30 2026-01-05 02:27:38.232726 | orchestrator | 2026-01-05 02:27:19 | INFO  | Checking status of 'Cirros 0.6.2' 2026-01-05 02:27:38.232730 | orchestrator | 2026-01-05 02:27:19 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-01-05 02:27:38.232734 | orchestrator | 2026-01-05 02:27:19 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-01-05 02:27:38.232738 | orchestrator | 2026-01-05 02:27:19 | INFO  | Processing image 'Cirros 0.6.3' 2026-01-05 02:27:38.232745 | orchestrator | 2026-01-05 02:27:19 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-01-05 02:27:38.232749 | orchestrator | 2026-01-05 02:27:19 | INFO  | Importing image Cirros 0.6.3 2026-01-05 02:27:38.232753 | orchestrator | 2026-01-05 02:27:19 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-05 02:27:38.232757 | orchestrator | 2026-01-05 02:27:20 | INFO  | Waiting for image to leave queued state... 2026-01-05 02:27:38.232760 | orchestrator | 2026-01-05 02:27:22 | INFO  | Waiting for import to complete... 
2026-01-05 02:27:38.232774 | orchestrator | 2026-01-05 02:27:32 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-01-05 02:27:38.232779 | orchestrator | 2026-01-05 02:27:32 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-01-05 02:27:38.232782 | orchestrator | 2026-01-05 02:27:32 | INFO  | Setting internal_version = 0.6.3 2026-01-05 02:27:38.232786 | orchestrator | 2026-01-05 02:27:32 | INFO  | Setting image_original_user = cirros 2026-01-05 02:27:38.232790 | orchestrator | 2026-01-05 02:27:32 | INFO  | Adding tag os:cirros 2026-01-05 02:27:38.232794 | orchestrator | 2026-01-05 02:27:33 | INFO  | Setting property architecture: x86_64 2026-01-05 02:27:38.232798 | orchestrator | 2026-01-05 02:27:33 | INFO  | Setting property hw_disk_bus: scsi 2026-01-05 02:27:38.232801 | orchestrator | 2026-01-05 02:27:33 | INFO  | Setting property hw_rng_model: virtio 2026-01-05 02:27:38.232805 | orchestrator | 2026-01-05 02:27:33 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-05 02:27:38.232809 | orchestrator | 2026-01-05 02:27:34 | INFO  | Setting property hw_watchdog_action: reset 2026-01-05 02:27:38.232813 | orchestrator | 2026-01-05 02:27:34 | INFO  | Setting property hypervisor_type: qemu 2026-01-05 02:27:38.232817 | orchestrator | 2026-01-05 02:27:34 | INFO  | Setting property os_distro: cirros 2026-01-05 02:27:38.232820 | orchestrator | 2026-01-05 02:27:34 | INFO  | Setting property os_purpose: minimal 2026-01-05 02:27:38.232824 | orchestrator | 2026-01-05 02:27:35 | INFO  | Setting property replace_frequency: never 2026-01-05 02:27:38.232828 | orchestrator | 2026-01-05 02:27:35 | INFO  | Setting property uuid_validity: none 2026-01-05 02:27:38.232832 | orchestrator | 2026-01-05 02:27:35 | INFO  | Setting property provided_until: none 2026-01-05 02:27:38.232836 | orchestrator | 2026-01-05 02:27:35 | INFO  | Setting property image_description: Cirros 2026-01-05 02:27:38.232840 | orchestrator | 2026-01-05 02:27:35 | INFO  | 
Setting property image_name: Cirros 2026-01-05 02:27:38.232843 | orchestrator | 2026-01-05 02:27:36 | INFO  | Setting property internal_version: 0.6.3 2026-01-05 02:27:38.232851 | orchestrator | 2026-01-05 02:27:36 | INFO  | Setting property image_original_user: cirros 2026-01-05 02:27:38.232854 | orchestrator | 2026-01-05 02:27:36 | INFO  | Setting property os_version: 0.6.3 2026-01-05 02:27:38.232858 | orchestrator | 2026-01-05 02:27:36 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-05 02:27:38.232862 | orchestrator | 2026-01-05 02:27:37 | INFO  | Setting property image_build_date: 2024-09-26 2026-01-05 02:27:38.232866 | orchestrator | 2026-01-05 02:27:37 | INFO  | Checking status of 'Cirros 0.6.3' 2026-01-05 02:27:38.232870 | orchestrator | 2026-01-05 02:27:37 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-01-05 02:27:38.232874 | orchestrator | 2026-01-05 02:27:37 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-01-05 02:27:38.583931 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-01-05 02:27:41.269123 | orchestrator | 2026-01-05 02:27:41 | INFO  | date: 2026-01-04 2026-01-05 02:27:41.269253 | orchestrator | 2026-01-05 02:27:41 | INFO  | image: octavia-amphora-haproxy-2024.2.20260104.qcow2 2026-01-05 02:27:41.269479 | orchestrator | 2026-01-05 02:27:41 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2 2026-01-05 02:27:41.269763 | orchestrator | 2026-01-05 02:27:41 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2.CHECKSUM 2026-01-05 02:27:41.504000 | orchestrator | 2026-01-05 02:27:41 | INFO  | checksum: efe91d4646b3899561e95b1c77d6d6bc98459aee738b3292e0742e3de3cdee03 2026-01-05 02:27:41.577189 | orchestrator | 
2026-01-05 02:27:41 | INFO  | It takes a moment until task 478219c1-6cbe-45a9-b851-3bfad09c0f7e (image-manager) has been started and output is visible here. 2026-01-05 02:29:34.314959 | orchestrator | 2026-01-05 02:27:43 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-04' 2026-01-05 02:29:34.315070 | orchestrator | 2026-01-05 02:27:43 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2: 200 2026-01-05 02:29:34.315084 | orchestrator | 2026-01-05 02:27:43 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-04 2026-01-05 02:29:34.315094 | orchestrator | 2026-01-05 02:27:43 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2 2026-01-05 02:29:34.315104 | orchestrator | 2026-01-05 02:27:45 | INFO  | Waiting for image to leave queued state... 2026-01-05 02:29:34.315112 | orchestrator | 2026-01-05 02:27:47 | INFO  | Waiting for import to complete... 2026-01-05 02:29:34.315121 | orchestrator | 2026-01-05 02:27:57 | INFO  | Waiting for import to complete... 2026-01-05 02:29:34.315129 | orchestrator | 2026-01-05 02:28:08 | INFO  | Waiting for import to complete... 2026-01-05 02:29:34.315137 | orchestrator | 2026-01-05 02:28:18 | INFO  | Waiting for import to complete... 2026-01-05 02:29:34.315148 | orchestrator | 2026-01-05 02:28:28 | INFO  | Waiting for import to complete... 2026-01-05 02:29:34.315157 | orchestrator | 2026-01-05 02:28:38 | INFO  | Waiting for import to complete... 2026-01-05 02:29:34.315166 | orchestrator | 2026-01-05 02:28:48 | INFO  | Waiting for import to complete... 2026-01-05 02:29:34.315173 | orchestrator | 2026-01-05 02:28:58 | INFO  | Waiting for import to complete... 2026-01-05 02:29:34.315181 | orchestrator | 2026-01-05 02:29:08 | INFO  | Waiting for import to complete... 
2026-01-05 02:29:34.315210 | orchestrator | 2026-01-05 02:29:18 | INFO  | Waiting for import to complete... 2026-01-05 02:29:34.315219 | orchestrator | 2026-01-05 02:29:29 | INFO  | Import of 'OpenStack Octavia Amphora 2026-01-04' successfully completed, reloading images 2026-01-05 02:29:34.315228 | orchestrator | 2026-01-05 02:29:29 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-01-04' 2026-01-05 02:29:34.315236 | orchestrator | 2026-01-05 02:29:29 | INFO  | Setting internal_version = 2026-01-04 2026-01-05 02:29:34.315244 | orchestrator | 2026-01-05 02:29:29 | INFO  | Setting image_original_user = ubuntu 2026-01-05 02:29:34.315253 | orchestrator | 2026-01-05 02:29:29 | INFO  | Adding tag amphora 2026-01-05 02:29:34.315262 | orchestrator | 2026-01-05 02:29:29 | INFO  | Adding tag os:ubuntu 2026-01-05 02:29:34.315275 | orchestrator | 2026-01-05 02:29:29 | INFO  | Setting property architecture: x86_64 2026-01-05 02:29:34.315289 | orchestrator | 2026-01-05 02:29:30 | INFO  | Setting property hw_disk_bus: scsi 2026-01-05 02:29:34.315302 | orchestrator | 2026-01-05 02:29:30 | INFO  | Setting property hw_rng_model: virtio 2026-01-05 02:29:34.315315 | orchestrator | 2026-01-05 02:29:30 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-05 02:29:34.315329 | orchestrator | 2026-01-05 02:29:30 | INFO  | Setting property hw_watchdog_action: reset 2026-01-05 02:29:34.315342 | orchestrator | 2026-01-05 02:29:30 | INFO  | Setting property hypervisor_type: qemu 2026-01-05 02:29:34.315355 | orchestrator | 2026-01-05 02:29:31 | INFO  | Setting property os_distro: ubuntu 2026-01-05 02:29:34.315369 | orchestrator | 2026-01-05 02:29:31 | INFO  | Setting property replace_frequency: quarterly 2026-01-05 02:29:34.315401 | orchestrator | 2026-01-05 02:29:31 | INFO  | Setting property uuid_validity: last-1 2026-01-05 02:29:34.315416 | orchestrator | 2026-01-05 02:29:31 | INFO  | Setting property provided_until: none 2026-01-05 02:29:34.315430 | orchestrator | 
2026-01-05 02:29:32 | INFO  | Setting property os_purpose: network 2026-01-05 02:29:34.315445 | orchestrator | 2026-01-05 02:29:32 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-01-05 02:29:34.315459 | orchestrator | 2026-01-05 02:29:32 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-01-05 02:29:34.315474 | orchestrator | 2026-01-05 02:29:32 | INFO  | Setting property internal_version: 2026-01-04 2026-01-05 02:29:34.315488 | orchestrator | 2026-01-05 02:29:32 | INFO  | Setting property image_original_user: ubuntu 2026-01-05 02:29:34.315501 | orchestrator | 2026-01-05 02:29:33 | INFO  | Setting property os_version: 2026-01-04 2026-01-05 02:29:34.315512 | orchestrator | 2026-01-05 02:29:33 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260104.qcow2 2026-01-05 02:29:34.315539 | orchestrator | 2026-01-05 02:29:33 | INFO  | Setting property image_build_date: 2026-01-04 2026-01-05 02:29:34.315550 | orchestrator | 2026-01-05 02:29:33 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-01-04' 2026-01-05 02:29:34.315559 | orchestrator | 2026-01-05 02:29:33 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-01-04' 2026-01-05 02:29:34.315569 | orchestrator | 2026-01-05 02:29:34 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-01-05 02:29:34.315578 | orchestrator | 2026-01-05 02:29:34 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-01-05 02:29:34.315589 | orchestrator | 2026-01-05 02:29:34 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-01-05 02:29:34.315607 | orchestrator | 2026-01-05 02:29:34 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-01-05 02:29:34.836309 | orchestrator | ok: Runtime: 0:03:48.187589 2026-01-05 02:29:34.851776 | 2026-01-05 02:29:34.851938 | TASK [Run checks] 2026-01-05 
02:29:35.605005 | orchestrator | + set -e 2026-01-05 02:29:35.605165 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 02:29:35.605185 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 02:29:35.605198 | orchestrator | ++ INTERACTIVE=false 2026-01-05 02:29:35.605208 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 02:29:35.605215 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 02:29:35.605224 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-05 02:29:35.606472 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-05 02:29:35.613948 | orchestrator | 2026-01-05 02:29:35.614085 | orchestrator | # CHECK 2026-01-05 02:29:35.614096 | orchestrator | 2026-01-05 02:29:35.614101 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 02:29:35.614110 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 02:29:35.614114 | orchestrator | + echo 2026-01-05 02:29:35.614119 | orchestrator | + echo '# CHECK' 2026-01-05 02:29:35.614123 | orchestrator | + echo 2026-01-05 02:29:35.614131 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-05 02:29:35.614568 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-05 02:29:35.686280 | orchestrator | 2026-01-05 02:29:35.686359 | orchestrator | ## Containers @ testbed-manager 2026-01-05 02:29:35.686366 | orchestrator | 2026-01-05 02:29:35.686373 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-05 02:29:35.686378 | orchestrator | + echo 2026-01-05 02:29:35.686383 | orchestrator | + echo '## Containers @ testbed-manager' 2026-01-05 02:29:35.686388 | orchestrator | + echo 2026-01-05 02:29:35.686392 | orchestrator | + osism container testbed-manager ps 2026-01-05 02:29:37.645310 | orchestrator | 2026-01-05 02:29:37 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-01-05 02:29:38.067842 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS 
PORTS NAMES 2026-01-05 02:29:38.067960 | orchestrator | 22410b80dfe2 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-01-05 02:29:38.067975 | orchestrator | dea9e3343bb2 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_alertmanager 2026-01-05 02:29:38.067981 | orchestrator | 47fe1c2ef762 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-01-05 02:29:38.067987 | orchestrator | 14756aadefa7 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-01-05 02:29:38.067992 | orchestrator | 962d44dd27a2 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-01-05 02:29:38.068002 | orchestrator | e2a89ae95cec registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 59 minutes ago Up 58 minutes cephclient 2026-01-05 02:29:38.068008 | orchestrator | 31784c38dcf4 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-01-05 02:29:38.068014 | orchestrator | e240b77f482e registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-01-05 02:29:38.068038 | orchestrator | c82c36397345 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-01-05 02:29:38.068044 | orchestrator | 28c6c7600c4a registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-01-05 02:29:38.068050 | orchestrator | 5d87ff8093d0 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago 
Up 2 hours (healthy) 80/tcp phpmyadmin 2026-01-05 02:29:38.068055 | orchestrator | 75eba7bc57d5 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-01-05 02:29:38.068062 | orchestrator | f6c0bc9402f4 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-01-05 02:29:38.068067 | orchestrator | fc111742d94f registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-01-05 02:29:38.068089 | orchestrator | 02f280c4a64f registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-01-05 02:29:38.068101 | orchestrator | 211faeb143ca registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-01-05 02:29:38.068107 | orchestrator | 9dfbe2d20616 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-01-05 02:29:38.068112 | orchestrator | 018346edb57c registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-01-05 02:29:38.068117 | orchestrator | 71aed3ac5601 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-01-05 02:29:38.068123 | orchestrator | a4aeead52dd9 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-01-05 02:29:38.068128 | orchestrator | d85bc65f4ba2 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-01-05 02:29:38.068134 | orchestrator | 3b39878e739c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 
2 hours ago Up 2 hours (healthy) osismclient 2026-01-05 02:29:38.068139 | orchestrator | 15d4f3e5fefc registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-01-05 02:29:38.068149 | orchestrator | 91b9804e5762 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-01-05 02:29:38.068154 | orchestrator | ec9a22072f76 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-01-05 02:29:38.068160 | orchestrator | bf3889e665a3 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-01-05 02:29:38.068165 | orchestrator | 1cb56e698a49 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-01-05 02:29:38.068171 | orchestrator | 0e11c4eadf4e registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-01-05 02:29:38.068176 | orchestrator | 5534d3001b42 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-01-05 02:29:38.068185 | orchestrator | 0a7c77253ad3 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-01-05 02:29:38.385573 | orchestrator | 2026-01-05 02:29:38.385688 | orchestrator | ## Images @ testbed-manager 2026-01-05 02:29:38.385705 | orchestrator | 2026-01-05 02:29:38.385746 | orchestrator | + echo 2026-01-05 02:29:38.385760 | orchestrator | + echo '## Images @ testbed-manager' 2026-01-05 02:29:38.385773 | orchestrator | + echo 2026-01-05 02:29:38.385790 | orchestrator | + osism container 
testbed-manager images 2026-01-05 02:29:40.781333 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-05 02:29:40.781433 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 f5a6cc51123f 23 hours ago 238MB 2026-01-05 02:29:40.781443 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 weeks ago 11.5MB 2026-01-05 02:29:40.781451 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 5 weeks ago 608MB 2026-01-05 02:29:40.781457 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB 2026-01-05 02:29:40.781464 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB 2026-01-05 02:29:40.781471 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB 2026-01-05 02:29:40.781477 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 5 weeks ago 308MB 2026-01-05 02:29:40.781486 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB 2026-01-05 02:29:40.781492 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 5 weeks ago 404MB 2026-01-05 02:29:40.781519 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 5 weeks ago 839MB 2026-01-05 02:29:40.781526 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB 2026-01-05 02:29:40.781532 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 5 weeks ago 330MB 2026-01-05 02:29:40.781539 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 5 weeks ago 613MB 2026-01-05 02:29:40.781546 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 5 weeks ago 
560MB 2026-01-05 02:29:40.781552 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 5 weeks ago 1.23GB 2026-01-05 02:29:40.781559 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 5 weeks ago 383MB 2026-01-05 02:29:40.781565 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 5 weeks ago 238MB 2026-01-05 02:29:40.781571 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 7 weeks ago 334MB 2026-01-05 02:29:40.781578 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine 13105d2858de 2 months ago 41.4MB 2026-01-05 02:29:40.781584 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 2 months ago 742MB 2026-01-05 02:29:40.781591 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 4 months ago 275MB 2026-01-05 02:29:40.781597 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 5 months ago 226MB 2026-01-05 02:29:40.781614 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 8 months ago 453MB 2026-01-05 02:29:40.781628 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 19 months ago 146MB 2026-01-05 02:29:40.781635 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-01-05 02:29:41.088318 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-05 02:29:41.089101 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-05 02:29:41.142234 | orchestrator | 2026-01-05 02:29:41.142344 | orchestrator | ## Containers @ testbed-node-0 2026-01-05 02:29:41.142359 | orchestrator | 2026-01-05 02:29:41.142369 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-05 02:29:41.142379 | orchestrator | + echo 2026-01-05 02:29:41.142389 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-01-05 02:29:41.142398 | orchestrator | + echo 2026-01-05 02:29:41.142407 | 
orchestrator | + osism container testbed-node-0 ps 2026-01-05 02:29:43.600254 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-05 02:29:43.600371 | orchestrator | 757becf1af6b registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_conductor 2026-01-05 02:29:43.600410 | orchestrator | 64dad78c1f7e registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-01-05 02:29:43.600427 | orchestrator | 9dbb6ece8cf6 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-01-05 02:29:43.600441 | orchestrator | 7a9bd8369796 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2026-01-05 02:29:43.600486 | orchestrator | 132bc0d346de registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-01-05 02:29:43.600502 | orchestrator | 80c784ce9c8a registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-01-05 02:29:43.600523 | orchestrator | 6b6ab83622ca registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-01-05 02:29:43.600537 | orchestrator | 86774da0df32 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-01-05 02:29:43.600552 | orchestrator | 972572bbb78c registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-01-05 02:29:43.600567 
| orchestrator | eaeb09bb4a0e registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-01-05 02:29:43.600581 | orchestrator | a708d9ad7dd9 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-01-05 02:29:43.600595 | orchestrator | ad31b0b896f9 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-01-05 02:29:43.600609 | orchestrator | f404f64268e6 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-01-05 02:29:43.600677 | orchestrator | 39a3d49c5bb9 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-01-05 02:29:43.600692 | orchestrator | 06cc26caaf18 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-01-05 02:29:43.600705 | orchestrator | 18d2865a648c registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) aodh_api 2026-01-05 02:29:43.600719 | orchestrator | 4a364f686f25 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-01-05 02:29:43.600775 | orchestrator | 433d89841beb registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-01-05 02:29:43.600790 | orchestrator | 815f1e183ebf registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-01-05 02:29:43.600830 | orchestrator | 
d4f2675b42b3 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-01-05 02:29:43.600843 | orchestrator | 0614451abb5c registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-01-05 02:29:43.600855 | orchestrator | 0037e7b4f51b registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-01-05 02:29:43.600876 | orchestrator | e450f091fb65 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-01-05 02:29:43.600888 | orchestrator | d20bd9693203 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-01-05 02:29:43.600900 | orchestrator | 0c9308598f32 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-01-05 02:29:43.600917 | orchestrator | 7549d504e1aa registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-01-05 02:29:43.600929 | orchestrator | 092a81983281 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-01-05 02:29:43.601271 | orchestrator | b420ecab0193 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-01-05 02:29:43.601290 | orchestrator | 1e97682a10e1 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) 
designate_backend_bind9 2026-01-05 02:29:43.601302 | orchestrator | a34b39667bfc registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-01-05 02:29:43.601315 | orchestrator | 5e824ba3b3cd registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-01-05 02:29:43.601327 | orchestrator | d84f2b7dd1d6 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-01-05 02:29:43.601340 | orchestrator | 309532dab2c0 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-01-05 02:29:43.601352 | orchestrator | 9db7d1bf24b9 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-01-05 02:29:43.601365 | orchestrator | e533d2cb6ac3 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-01-05 02:29:43.601377 | orchestrator | 4e24c16d4190 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-01-05 02:29:43.601389 | orchestrator | 7a5aa6c7e1cf registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-01-05 02:29:43.601402 | orchestrator | e63bb4901130 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-01-05 02:29:43.601414 | orchestrator | b9728665dc8b registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes 
(healthy) skyline_apiserver 2026-01-05 02:29:43.601426 | orchestrator | e058029322a5 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-01-05 02:29:43.601449 | orchestrator | 0399e5d69027 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-01-05 02:29:43.601462 | orchestrator | 16dc60a6ef6c registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-01-05 02:29:43.601480 | orchestrator | 1ef4a3c71538 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-01-05 02:29:43.601492 | orchestrator | 89d39bc7ccf2 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-01-05 02:29:43.601505 | orchestrator | 45002f30053b registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-01-05 02:29:43.601517 | orchestrator | 9b7b7280b66a registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-01-05 02:29:43.601529 | orchestrator | 6340e7b8de3d registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-01-05 02:29:43.601541 | orchestrator | 4711ebc8625d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-01-05 02:29:43.601561 | orchestrator | 385ac42c9041 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-01-05 
02:29:43.601573 | orchestrator | 44b42cd224df registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-0 2026-01-05 02:29:43.601586 | orchestrator | ea5d85057924 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-01-05 02:29:43.601598 | orchestrator | c181a7e52b5d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-01-05 02:29:43.601610 | orchestrator | 3e759b78b568 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-01-05 02:29:43.601623 | orchestrator | db082783c8a7 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-01-05 02:29:43.601635 | orchestrator | 72e1917f2caa registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-01-05 02:29:43.601647 | orchestrator | 6ebaba442027 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-01-05 02:29:43.601664 | orchestrator | 207302e7968f registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-01-05 02:29:43.601676 | orchestrator | 4b860511a225 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-01-05 02:29:43.601696 | orchestrator | 301ed6f79299 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-01-05 02:29:43.601708 | orchestrator | a4a5e8aa9462 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-01-05 02:29:43.601721 | orchestrator | 5c0909ae1229 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-01-05 02:29:43.601805 | orchestrator | aa0c0215a227 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-01-05 02:29:43.601818 | orchestrator | 5936b323e35d registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-01-05 02:29:43.601830 | orchestrator | c7b08d303dc4 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-01-05 02:29:43.601842 | orchestrator | 63e4084df651 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-01-05 02:29:43.601854 | orchestrator | a18470da1ddd registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-01-05 02:29:43.601866 | orchestrator | 41db02cb15e0 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-01-05 02:29:43.601877 | orchestrator | e7a74b0a3a70 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-01-05 02:29:43.601897 | orchestrator | ceb0bf4a5cbb registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-01-05 02:29:43.601909 | orchestrator | 3647240aa329 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours 
kolla_toolbox 2026-01-05 02:29:43.601921 | orchestrator | 1a3e20ca5472 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-01-05 02:29:43.946558 | orchestrator | 2026-01-05 02:29:43.946675 | orchestrator | ## Images @ testbed-node-0 2026-01-05 02:29:43.946692 | orchestrator | 2026-01-05 02:29:43.946706 | orchestrator | + echo 2026-01-05 02:29:43.946718 | orchestrator | + echo '## Images @ testbed-node-0' 2026-01-05 02:29:43.946791 | orchestrator | + echo 2026-01-05 02:29:43.946822 | orchestrator | + osism container testbed-node-0 images 2026-01-05 02:29:46.467242 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-05 02:29:46.467369 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 5 weeks ago 322MB 2026-01-05 02:29:46.467384 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 5 weeks ago 266MB 2026-01-05 02:29:46.467390 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 5 weeks ago 1.56GB 2026-01-05 02:29:46.467396 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 5 weeks ago 1.53GB 2026-01-05 02:29:46.467426 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 5 weeks ago 276MB 2026-01-05 02:29:46.467432 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB 2026-01-05 02:29:46.467438 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB 2026-01-05 02:29:46.467445 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 5 weeks ago 1.02GB 2026-01-05 02:29:46.467451 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 5 weeks ago 412MB 2026-01-05 02:29:46.467457 | orchestrator | registry.osism.tech/kolla/release/haproxy 
2.8.15.20251130 6d4c583df983 5 weeks ago 274MB 2026-01-05 02:29:46.467463 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB 2026-01-05 02:29:46.467470 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 5 weeks ago 273MB 2026-01-05 02:29:46.467476 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 5 weeks ago 273MB 2026-01-05 02:29:46.467482 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 5 weeks ago 452MB 2026-01-05 02:29:46.467488 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 5 weeks ago 1.15GB 2026-01-05 02:29:46.467494 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 5 weeks ago 301MB 2026-01-05 02:29:46.467500 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 5 weeks ago 298MB 2026-01-05 02:29:46.467506 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB 2026-01-05 02:29:46.467511 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 5 weeks ago 292MB 2026-01-05 02:29:46.467518 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB 2026-01-05 02:29:46.467526 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 5 weeks ago 279MB 2026-01-05 02:29:46.467533 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 5 weeks ago 975MB 2026-01-05 02:29:46.467540 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 5 weeks ago 279MB 2026-01-05 02:29:46.467546 | orchestrator | 
registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 5 weeks ago 1.37GB 2026-01-05 02:29:46.467553 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 5 weeks ago 1.21GB 2026-01-05 02:29:46.467560 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 5 weeks ago 1.21GB 2026-01-05 02:29:46.467567 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 5 weeks ago 1.21GB 2026-01-05 02:29:46.467579 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 5 weeks ago 976MB 2026-01-05 02:29:46.467586 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 5 weeks ago 976MB 2026-01-05 02:29:46.467593 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 5 weeks ago 1.13GB 2026-01-05 02:29:46.467607 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 5 weeks ago 1.24GB 2026-01-05 02:29:46.467632 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 5 weeks ago 1.22GB 2026-01-05 02:29:46.467638 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 5 weeks ago 1.06GB 2026-01-05 02:29:46.467644 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 5 weeks ago 1.05GB 2026-01-05 02:29:46.467649 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 5 weeks ago 1.05GB 2026-01-05 02:29:46.467655 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 5 weeks ago 974MB 2026-01-05 02:29:46.467661 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 5 weeks ago 974MB 2026-01-05 02:29:46.467668 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 5 weeks ago 974MB 2026-01-05 02:29:46.467674 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 5 weeks ago 973MB 2026-01-05 02:29:46.467680 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 5 weeks ago 991MB 2026-01-05 02:29:46.467687 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 5 weeks ago 991MB 2026-01-05 02:29:46.467693 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 5 weeks ago 990MB 2026-01-05 02:29:46.467699 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 5 weeks ago 1.09GB 2026-01-05 02:29:46.467704 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 5 weeks ago 1.04GB 2026-01-05 02:29:46.467711 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 5 weeks ago 1.04GB 2026-01-05 02:29:46.467716 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 5 weeks ago 1.03GB 2026-01-05 02:29:46.467722 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 5 weeks ago 1.03GB 2026-01-05 02:29:46.467728 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 5 weeks ago 1.05GB 2026-01-05 02:29:46.467822 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 5 weeks ago 1.03GB 2026-01-05 02:29:46.467831 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 5 weeks ago 1.05GB 2026-01-05 02:29:46.467837 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 5 weeks ago 1.16GB 2026-01-05 02:29:46.467843 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 5 weeks ago 1.1GB 2026-01-05 02:29:46.467849 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 5 weeks ago 983MB 2026-01-05 02:29:46.467855 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 5 weeks ago 989MB 2026-01-05 02:29:46.467861 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 5 weeks ago 984MB 2026-01-05 02:29:46.467867 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 5 weeks ago 984MB 2026-01-05 02:29:46.467882 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 5 weeks ago 989MB 2026-01-05 02:29:46.467888 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 5 weeks ago 984MB 2026-01-05 02:29:46.467901 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 5 weeks ago 1.05GB 2026-01-05 02:29:46.467907 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 5 weeks ago 990MB 2026-01-05 02:29:46.467914 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 5 weeks ago 1.72GB 2026-01-05 02:29:46.467920 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 5 weeks ago 1.4GB 2026-01-05 02:29:46.467927 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 5 weeks ago 1.41GB 2026-01-05 02:29:46.467942 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 5 weeks ago 1.4GB 2026-01-05 02:29:46.467948 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 5 weeks ago 840MB 2026-01-05 02:29:46.467954 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 5 weeks ago 840MB 2026-01-05 02:29:46.467960 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 5 weeks ago 840MB 2026-01-05 02:29:46.467966 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 5 weeks ago 840MB 2026-01-05 02:29:46.467972 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-01-05 02:29:46.805062 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-05 02:29:46.805171 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-05 02:29:46.864348 | orchestrator | 2026-01-05 02:29:46.864462 | orchestrator | ## Containers @ testbed-node-1 2026-01-05 02:29:46.864491 | orchestrator | 2026-01-05 02:29:46.864508 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-05 02:29:46.864523 | orchestrator | + echo 2026-01-05 02:29:46.864538 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-01-05 02:29:46.864553 | orchestrator | + echo 2026-01-05 02:29:46.864570 | orchestrator | + osism container testbed-node-1 ps 2026-01-05 02:29:49.317510 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-05 02:29:49.317582 | orchestrator | 02959e5d28aa registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_conductor 2026-01-05 02:29:49.317590 | orchestrator | bd87543bdf0a registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-01-05 02:29:49.317596 | orchestrator | 59bd6385e05a registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-01-05 02:29:49.317601 | orchestrator | 84e342ff572e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 
minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2026-01-05 02:29:49.317608 | orchestrator | 92462d6b2703 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-01-05 02:29:49.317612 | orchestrator | 8af82ccff3b4 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-01-05 02:29:49.317629 | orchestrator | a4df010eb6c6 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-01-05 02:29:49.317634 | orchestrator | 99361ae6c018 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 11 minutes ago Up 10 minutes prometheus_node_exporter 2026-01-05 02:29:49.317638 | orchestrator | 216da1cb928c registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) manila_share 2026-01-05 02:29:49.317642 | orchestrator | 26f1cf9cd05a registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-01-05 02:29:49.317646 | orchestrator | 859882b6cb35 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-01-05 02:29:49.317650 | orchestrator | 7852e3c9ed42 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-01-05 02:29:49.317658 | orchestrator | 52b08cc9d4e5 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-01-05 02:29:49.317662 | orchestrator | 46391c829785 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-01-05 02:29:49.317666 | orchestrator | 431f3730ea46 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-01-05 02:29:49.317670 | orchestrator | 9271550bbb37 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-01-05 02:29:49.317674 | orchestrator | 34465262c6ac registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-01-05 02:29:49.317678 | orchestrator | 807cbc144abd registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-01-05 02:29:49.317682 | orchestrator | 8f006ac34bc8 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-01-05 02:29:49.317694 | orchestrator | b6026273e482 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-01-05 02:29:49.317699 | orchestrator | a794cbf70714 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-01-05 02:29:49.317702 | orchestrator | f636607060ba registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-01-05 02:29:49.317706 | orchestrator | 4074d92ae443 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-01-05 02:29:49.317710 | orchestrator | b5e4a5c321b6 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-01-05 02:29:49.317717 | orchestrator | 5391960dbaeb registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-01-05 02:29:49.317721 | orchestrator | 39b87a4b1ae1 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-01-05 02:29:49.317725 | orchestrator | c79f75666da6 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-01-05 02:29:49.317729 | orchestrator | f53bf12ac16c registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-01-05 02:29:49.317733 | orchestrator | 0ba370f2b539 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-01-05 02:29:49.317737 | orchestrator | 40b333d15e83 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-01-05 02:29:49.317741 | orchestrator | c2cb996a4c38 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-01-05 02:29:49.317784 | orchestrator | 62659e21ffa7 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-01-05 02:29:49.317788 | orchestrator | f0fd132a6de4 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-01-05 
02:29:49.317792 | orchestrator | ee91555d80d3 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-01-05 02:29:49.317796 | orchestrator | e85b3ac4fa48 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-01-05 02:29:49.317800 | orchestrator | d798d15acabd registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-01-05 02:29:49.317807 | orchestrator | 058a2ce0c607 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-01-05 02:29:49.317811 | orchestrator | f5a6f0af5aa0 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-01-05 02:29:49.317815 | orchestrator | 3bcc6e5bf7ce registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-01-05 02:29:49.317822 | orchestrator | cd8b3895f592 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-01-05 02:29:49.317827 | orchestrator | 2850bd06e3ef registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-01-05 02:29:49.317834 | orchestrator | 1c9b2fd28b2e registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-01-05 02:29:49.317838 | orchestrator | 2f3f9a1ab22e registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-01-05 02:29:49.317842 | orchestrator | 
09d97acd6c65 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-01-05 02:29:49.317846 | orchestrator | f28289d8f2ff registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-01-05 02:29:49.317850 | orchestrator | 98f9065f5c97 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-01-05 02:29:49.317853 | orchestrator | e899b89fe19b registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-01-05 02:29:49.317857 | orchestrator | 53dc8af1f91b registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-01-05 02:29:49.317861 | orchestrator | 0a168d483b19 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-01-05 02:29:49.317865 | orchestrator | 7f343a7cd6f7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-1 2026-01-05 02:29:49.317870 | orchestrator | 48b3f4557709 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-01-05 02:29:49.317874 | orchestrator | 8220df20b331 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-01-05 02:29:49.317878 | orchestrator | 94bb9894c76c registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-01-05 02:29:49.317882 | orchestrator | 14674feb860e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-01-05 02:29:49.317885 | orchestrator | 3e3fb3b7e1a3 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-01-05 02:29:49.317889 | orchestrator | bc2ced342de4 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-01-05 02:29:49.317893 | orchestrator | 7ba3ede1face registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-01-05 02:29:49.317897 | orchestrator | 0c99d56d81cc registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-01-05 02:29:49.317901 | orchestrator | d097c0202f01 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-01-05 02:29:49.317909 | orchestrator | 500180570e0c registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-01-05 02:29:49.317914 | orchestrator | 72ee01d5c0cf registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-01-05 02:29:49.317918 | orchestrator | 036915d831e8 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-01-05 02:29:49.317922 | orchestrator | a4ba059d7dce registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-01-05 02:29:49.317925 | orchestrator | bbed4e798e02 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-01-05 02:29:49.317931 | orchestrator | 2cbfa1738f58 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-01-05 02:29:49.317935 | orchestrator | 6b3248d27571 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-01-05 02:29:49.317939 | orchestrator | 36ecbcb97366 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-01-05 02:29:49.317943 | orchestrator | 3273dae0daf8 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-01-05 02:29:49.317947 | orchestrator | 3bfd55963397 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-01-05 02:29:49.317953 | orchestrator | 72ee7540356d registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-01-05 02:29:49.317957 | orchestrator | 1ac6410671b4 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-01-05 02:29:49.673725 | orchestrator | 2026-01-05 02:29:49.673860 | orchestrator | ## Images @ testbed-node-1 2026-01-05 02:29:49.673871 | orchestrator | 2026-01-05 02:29:49.673877 | orchestrator | + echo 2026-01-05 02:29:49.673884 | orchestrator | + echo '## Images @ testbed-node-1' 2026-01-05 02:29:49.673891 | orchestrator | + echo 2026-01-05 02:29:49.673896 | orchestrator | + osism container testbed-node-1 images 2026-01-05 02:29:52.120455 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-05 02:29:52.120563 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 5 weeks ago 322MB 2026-01-05 02:29:52.120572 | 
orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 5 weeks ago 266MB 2026-01-05 02:29:52.120577 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 5 weeks ago 1.56GB 2026-01-05 02:29:52.120582 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 5 weeks ago 276MB 2026-01-05 02:29:52.120586 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 5 weeks ago 1.53GB 2026-01-05 02:29:52.120590 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB 2026-01-05 02:29:52.120611 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB 2026-01-05 02:29:52.120617 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 5 weeks ago 1.02GB 2026-01-05 02:29:52.120624 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 5 weeks ago 412MB 2026-01-05 02:29:52.120629 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 5 weeks ago 274MB 2026-01-05 02:29:52.120635 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB 2026-01-05 02:29:52.120641 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 5 weeks ago 273MB 2026-01-05 02:29:52.120647 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 5 weeks ago 273MB 2026-01-05 02:29:52.120653 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 5 weeks ago 452MB 2026-01-05 02:29:52.120660 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 5 weeks ago 1.15GB 2026-01-05 02:29:52.120666 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 
aedc672fb472 5 weeks ago 301MB 2026-01-05 02:29:52.120672 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 5 weeks ago 298MB 2026-01-05 02:29:52.120678 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB 2026-01-05 02:29:52.120684 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 5 weeks ago 292MB 2026-01-05 02:29:52.120689 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB 2026-01-05 02:29:52.120695 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 5 weeks ago 279MB 2026-01-05 02:29:52.120701 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 5 weeks ago 975MB 2026-01-05 02:29:52.120707 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 5 weeks ago 279MB 2026-01-05 02:29:52.120713 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 5 weeks ago 1.37GB 2026-01-05 02:29:52.120719 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 5 weeks ago 1.21GB 2026-01-05 02:29:52.120724 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 5 weeks ago 1.21GB 2026-01-05 02:29:52.120730 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 5 weeks ago 1.21GB 2026-01-05 02:29:52.120736 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 5 weeks ago 976MB 2026-01-05 02:29:52.120742 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 5 weeks ago 976MB 2026-01-05 02:29:52.120748 | orchestrator | 
registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 5 weeks ago 1.13GB 2026-01-05 02:29:52.120814 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 5 weeks ago 1.24GB 2026-01-05 02:29:52.120833 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 5 weeks ago 1.22GB 2026-01-05 02:29:52.120847 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 5 weeks ago 1.06GB 2026-01-05 02:29:52.120853 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 5 weeks ago 1.05GB 2026-01-05 02:29:52.120859 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 5 weeks ago 1.05GB 2026-01-05 02:29:52.120864 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 5 weeks ago 974MB 2026-01-05 02:29:52.120870 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 5 weeks ago 974MB 2026-01-05 02:29:52.120900 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 5 weeks ago 974MB 2026-01-05 02:29:52.120906 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 5 weeks ago 973MB 2026-01-05 02:29:52.120912 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 5 weeks ago 991MB 2026-01-05 02:29:52.120918 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 5 weeks ago 991MB 2026-01-05 02:29:52.120924 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 5 weeks ago 990MB 2026-01-05 02:29:52.120929 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 5 weeks ago 1.09GB 2026-01-05 02:29:52.120935 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 5 weeks ago 1.04GB 2026-01-05 02:29:52.120941 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 5 weeks ago 1.04GB 2026-01-05 02:29:52.120947 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 5 weeks ago 1.03GB 2026-01-05 02:29:52.120953 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 5 weeks ago 1.03GB 2026-01-05 02:29:52.120958 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 5 weeks ago 1.05GB 2026-01-05 02:29:52.120964 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 5 weeks ago 1.03GB 2026-01-05 02:29:52.120970 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 5 weeks ago 1.05GB 2026-01-05 02:29:52.120976 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 5 weeks ago 1.16GB 2026-01-05 02:29:52.120981 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 5 weeks ago 1.1GB 2026-01-05 02:29:52.120987 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 5 weeks ago 983MB 2026-01-05 02:29:52.120992 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 5 weeks ago 989MB 2026-01-05 02:29:52.120997 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 5 weeks ago 984MB 2026-01-05 02:29:52.121004 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 5 weeks ago 984MB 2026-01-05 02:29:52.121010 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 5 weeks ago 989MB 2026-01-05 02:29:52.121015 | orchestrator | 
registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 5 weeks ago 984MB 2026-01-05 02:29:52.121021 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 5 weeks ago 1.05GB 2026-01-05 02:29:52.121034 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 5 weeks ago 990MB 2026-01-05 02:29:52.121040 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 5 weeks ago 1.72GB 2026-01-05 02:29:52.121046 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 5 weeks ago 1.4GB 2026-01-05 02:29:52.121053 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 5 weeks ago 1.41GB 2026-01-05 02:29:52.121063 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 5 weeks ago 1.4GB 2026-01-05 02:29:52.121070 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 5 weeks ago 840MB 2026-01-05 02:29:52.121077 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 5 weeks ago 840MB 2026-01-05 02:29:52.121083 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 5 weeks ago 840MB 2026-01-05 02:29:52.121089 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 5 weeks ago 840MB 2026-01-05 02:29:52.121096 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-01-05 02:29:52.461868 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-05 02:29:52.462671 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-05 02:29:52.520303 | orchestrator | 2026-01-05 02:29:52.520420 | orchestrator | ## Containers @ testbed-node-2 2026-01-05 02:29:52.520442 | orchestrator | 2026-01-05 02:29:52.520458 | 
orchestrator | + [[ 1 -eq -1 ]] 2026-01-05 02:29:52.520473 | orchestrator | + echo 2026-01-05 02:29:52.520490 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-01-05 02:29:52.520507 | orchestrator | + echo 2026-01-05 02:29:52.520523 | orchestrator | + osism container testbed-node-2 ps 2026-01-05 02:29:55.040687 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-05 02:29:55.040809 | orchestrator | 49e8dbc263c4 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_conductor 2026-01-05 02:29:55.040820 | orchestrator | c84b2286519e registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-01-05 02:29:55.040826 | orchestrator | 2e2fc0fefdbf registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-01-05 02:29:55.040831 | orchestrator | 93af9747f38b registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2026-01-05 02:29:55.040838 | orchestrator | 1c4d9fe0a9b4 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-01-05 02:29:55.040843 | orchestrator | 167c43f72d50 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-01-05 02:29:55.040861 | orchestrator | 18a7ef5f253e registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-01-05 02:29:55.040882 | orchestrator | 7195e667afb7 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes 
prometheus_node_exporter 2026-01-05 02:29:55.040915 | orchestrator | b74e4bc5e59c registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share 2026-01-05 02:29:55.040924 | orchestrator | 0bde1ef48403 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-01-05 02:29:55.040931 | orchestrator | 3d41b0c89471 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-01-05 02:29:55.040936 | orchestrator | b40ebd0299d2 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-01-05 02:29:55.040957 | orchestrator | 92eccf8e0a5d registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-01-05 02:29:55.040962 | orchestrator | 1737ffb9fcd7 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-01-05 02:29:55.040967 | orchestrator | e394d016c125 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-01-05 02:29:55.040972 | orchestrator | 8e7422d0644c registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-01-05 02:29:55.040977 | orchestrator | 4926f6e6ab4f registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-01-05 02:29:55.040982 | orchestrator | ba5b1cfa9fc0 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) 
ceilometer_notification 2026-01-05 02:29:55.040986 | orchestrator | 9bcdeac249e2 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-01-05 02:29:55.041007 | orchestrator | 99b69c499aca registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-01-05 02:29:55.041012 | orchestrator | 7a24fc33cb0f registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-01-05 02:29:55.041017 | orchestrator | f0252f69c5fc registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-01-05 02:29:55.041022 | orchestrator | 11e8569f677a registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-01-05 02:29:55.041026 | orchestrator | c023528663ad registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-01-05 02:29:55.041031 | orchestrator | b4c21136f784 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-01-05 02:29:55.041041 | orchestrator | 3cebe4e515ad registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-01-05 02:29:55.041046 | orchestrator | 07175dc3df84 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-01-05 02:29:55.041050 | orchestrator | e279405a5441 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init 
--single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-01-05 02:29:55.041055 | orchestrator | d9112252932a registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-01-05 02:29:55.041060 | orchestrator | 2817e2d63622 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-01-05 02:29:55.041065 | orchestrator | 7e9d34aef15f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-01-05 02:29:55.041070 | orchestrator | 0ba5041b08df registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-01-05 02:29:55.041074 | orchestrator | c06ac74e5151 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-01-05 02:29:55.041079 | orchestrator | 52789f462360 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-01-05 02:29:55.041084 | orchestrator | 900eb46ced91 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-01-05 02:29:55.041088 | orchestrator | 8801a2d45be2 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-01-05 02:29:55.041093 | orchestrator | db03f6ac8936 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-01-05 02:29:55.041098 | orchestrator | 40bf07905103 
registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-01-05 02:29:55.041103 | orchestrator | 51e8032a2933 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-01-05 02:29:55.041112 | orchestrator | d8ac40324055 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-01-05 02:29:55.041117 | orchestrator | c4089156d4a8 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-01-05 02:29:55.041122 | orchestrator | e84fae11d8ee registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-01-05 02:29:55.041126 | orchestrator | c3cce72c1264 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-01-05 02:29:55.041138 | orchestrator | 3f01dc7c094e registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-01-05 02:29:55.041143 | orchestrator | b00fa39dab85 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-01-05 02:29:55.041148 | orchestrator | 092b438a1bd4 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-01-05 02:29:55.041152 | orchestrator | f39437ec7053 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-01-05 02:29:55.041157 | orchestrator | e5530f8f06a0 
registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-01-05 02:29:55.041162 | orchestrator | 09800c66aadd registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-01-05 02:29:55.041166 | orchestrator | de3d2c9ca124 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-2 2026-01-05 02:29:55.041171 | orchestrator | 9b701c73c6d9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-01-05 02:29:55.041179 | orchestrator | 62bad87f6045 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-01-05 02:29:55.041184 | orchestrator | e471df488bdc registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-01-05 02:29:55.041192 | orchestrator | edec5cc48016 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-01-05 02:29:55.041197 | orchestrator | b0635523e799 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-01-05 02:29:55.041202 | orchestrator | f86fd57fe34a registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-01-05 02:29:55.041206 | orchestrator | 64be2b3672ad registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-01-05 02:29:55.041211 | orchestrator | 91c7cf8fb2c5 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-01-05 02:29:55.041217 | orchestrator | c4bbddd69762 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-01-05 02:29:55.041226 | orchestrator | db42a3c4276b registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-01-05 02:29:55.041232 | orchestrator | cac7f6daa76f registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-01-05 02:29:55.041242 | orchestrator | 8f3fe22c0039 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-01-05 02:29:55.041247 | orchestrator | d52ab2f55a43 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-01-05 02:29:55.041253 | orchestrator | 712fa88aaff3 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-01-05 02:29:55.041258 | orchestrator | b6ad42a98273 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-01-05 02:29:55.041264 | orchestrator | 1731cf027e56 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-01-05 02:29:55.041270 | orchestrator | fa19156bf8e7 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-01-05 02:29:55.041275 | orchestrator | 2b1c06068afb registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours 
(healthy) haproxy 2026-01-05 02:29:55.041281 | orchestrator | 5c32ee1eb0b3 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-01-05 02:29:55.041286 | orchestrator | 62edba74aab6 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-01-05 02:29:55.041292 | orchestrator | 212f271d14b7 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-01-05 02:29:55.425095 | orchestrator | 2026-01-05 02:29:55.425189 | orchestrator | ## Images @ testbed-node-2 2026-01-05 02:29:55.425199 | orchestrator | 2026-01-05 02:29:55.425205 | orchestrator | + echo 2026-01-05 02:29:55.425210 | orchestrator | + echo '## Images @ testbed-node-2' 2026-01-05 02:29:55.425215 | orchestrator | + echo 2026-01-05 02:29:55.425219 | orchestrator | + osism container testbed-node-2 images 2026-01-05 02:29:57.972089 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-05 02:29:57.972191 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 5 weeks ago 322MB 2026-01-05 02:29:57.972200 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 5 weeks ago 266MB 2026-01-05 02:29:57.972207 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 5 weeks ago 1.56GB 2026-01-05 02:29:57.972228 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 5 weeks ago 276MB 2026-01-05 02:29:57.972235 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 5 weeks ago 1.53GB 2026-01-05 02:29:57.972242 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 5 weeks ago 669MB 2026-01-05 02:29:57.972249 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 5 weeks ago 265MB 2026-01-05 
02:29:57.972256 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 5 weeks ago 1.02GB 2026-01-05 02:29:57.972281 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 5 weeks ago 412MB 2026-01-05 02:29:57.972288 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 5 weeks ago 274MB 2026-01-05 02:29:57.972300 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 5 weeks ago 578MB 2026-01-05 02:29:57.972307 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 5 weeks ago 273MB 2026-01-05 02:29:57.972314 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 5 weeks ago 273MB 2026-01-05 02:29:57.972321 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 5 weeks ago 452MB 2026-01-05 02:29:57.972327 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 5 weeks ago 1.15GB 2026-01-05 02:29:57.972335 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 5 weeks ago 301MB 2026-01-05 02:29:57.972341 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 5 weeks ago 298MB 2026-01-05 02:29:57.972345 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 5 weeks ago 357MB 2026-01-05 02:29:57.972350 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 5 weeks ago 292MB 2026-01-05 02:29:57.972354 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 5 weeks ago 305MB 2026-01-05 02:29:57.972359 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 5 weeks ago 279MB 2026-01-05 02:29:57.972363 
| orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 5 weeks ago 975MB 2026-01-05 02:29:57.972368 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 5 weeks ago 279MB 2026-01-05 02:29:57.972372 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 5 weeks ago 1.37GB 2026-01-05 02:29:57.972376 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 5 weeks ago 1.21GB 2026-01-05 02:29:57.972381 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 5 weeks ago 1.21GB 2026-01-05 02:29:57.972385 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 5 weeks ago 1.21GB 2026-01-05 02:29:57.972390 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 5 weeks ago 976MB 2026-01-05 02:29:57.972394 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 5 weeks ago 976MB 2026-01-05 02:29:57.972398 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 5 weeks ago 1.13GB 2026-01-05 02:29:57.972403 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 5 weeks ago 1.24GB 2026-01-05 02:29:57.972421 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 5 weeks ago 1.22GB 2026-01-05 02:29:57.972426 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 5 weeks ago 1.06GB 2026-01-05 02:29:57.972430 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 5 weeks ago 1.05GB 2026-01-05 02:29:57.972435 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 5 weeks ago 1.05GB 2026-01-05 02:29:57.972446 | orchestrator | 
registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 5 weeks ago 974MB 2026-01-05 02:29:57.972450 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 5 weeks ago 974MB 2026-01-05 02:29:57.972455 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 5 weeks ago 974MB 2026-01-05 02:29:57.972465 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 5 weeks ago 973MB 2026-01-05 02:29:57.972470 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 5 weeks ago 991MB 2026-01-05 02:29:57.972475 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 5 weeks ago 991MB 2026-01-05 02:29:57.972479 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 5 weeks ago 990MB 2026-01-05 02:29:57.972484 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 5 weeks ago 1.09GB 2026-01-05 02:29:57.972488 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 5 weeks ago 1.04GB 2026-01-05 02:29:57.972493 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 5 weeks ago 1.04GB 2026-01-05 02:29:57.972497 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 5 weeks ago 1.03GB 2026-01-05 02:29:57.972502 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 5 weeks ago 1.03GB 2026-01-05 02:29:57.972506 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 5 weeks ago 1.05GB 2026-01-05 02:29:57.972511 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 5 weeks ago 1.03GB 2026-01-05 02:29:57.972519 | orchestrator | 
registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 5 weeks ago 1.05GB 2026-01-05 02:29:57.972526 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 5 weeks ago 1.16GB 2026-01-05 02:29:57.972533 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 5 weeks ago 1.1GB 2026-01-05 02:29:57.972539 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 5 weeks ago 983MB 2026-01-05 02:29:57.972546 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 5 weeks ago 989MB 2026-01-05 02:29:57.972554 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 5 weeks ago 984MB 2026-01-05 02:29:57.972562 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 5 weeks ago 984MB 2026-01-05 02:29:57.972568 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 5 weeks ago 989MB 2026-01-05 02:29:57.972572 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 5 weeks ago 984MB 2026-01-05 02:29:57.972577 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 5 weeks ago 1.05GB 2026-01-05 02:29:57.972581 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 5 weeks ago 990MB 2026-01-05 02:29:57.972586 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 5 weeks ago 1.72GB 2026-01-05 02:29:57.972595 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 5 weeks ago 1.4GB 2026-01-05 02:29:57.972599 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 5 weeks ago 1.41GB 2026-01-05 02:29:57.972609 | orchestrator | 
registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 5 weeks ago 1.4GB 2026-01-05 02:29:57.972616 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 5 weeks ago 840MB 2026-01-05 02:29:57.972623 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 5 weeks ago 840MB 2026-01-05 02:29:57.972631 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 5 weeks ago 840MB 2026-01-05 02:29:57.972643 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 5 weeks ago 840MB 2026-01-05 02:29:57.972650 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-01-05 02:29:58.303580 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-01-05 02:29:58.312684 | orchestrator | + set -e 2026-01-05 02:29:58.312769 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 02:29:58.312805 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 02:29:58.312816 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 02:29:58.312826 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 02:29:58.312835 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 02:29:58.312845 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 02:29:58.312857 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 02:29:58.312868 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 02:29:58.312878 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 02:29:58.312889 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 02:29:58.312899 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 02:29:58.312910 | orchestrator | ++ export ARA=false 2026-01-05 02:29:58.312921 | orchestrator | ++ ARA=false 2026-01-05 02:29:58.312932 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 02:29:58.312942 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 
02:29:58.312952 | orchestrator | ++ export TEMPEST=false 2026-01-05 02:29:58.312964 | orchestrator | ++ TEMPEST=false 2026-01-05 02:29:58.312971 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 02:29:58.312977 | orchestrator | ++ IS_ZUUL=true 2026-01-05 02:29:58.312984 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 02:29:58.312990 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 02:29:58.312997 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 02:29:58.313221 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 02:29:58.313238 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 02:29:58.313248 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 02:29:58.313259 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 02:29:58.313269 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 02:29:58.313279 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 02:29:58.313289 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 02:29:58.313300 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-05 02:29:58.313311 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-01-05 02:29:58.324698 | orchestrator | + set -e 2026-01-05 02:29:58.324791 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 02:29:58.324804 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 02:29:58.324817 | orchestrator | ++ INTERACTIVE=false 2026-01-05 02:29:58.324828 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 02:29:58.324838 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 02:29:58.324848 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-05 02:29:58.326333 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-05 02:29:58.333412 | orchestrator | 2026-01-05 02:29:58.333475 | orchestrator | # Ceph status 2026-01-05 
02:29:58.333484 | orchestrator | 2026-01-05 02:29:58.333492 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 02:29:58.333501 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 02:29:58.333509 | orchestrator | + echo 2026-01-05 02:29:58.333517 | orchestrator | + echo '# Ceph status' 2026-01-05 02:29:58.333551 | orchestrator | + echo 2026-01-05 02:29:58.333559 | orchestrator | + ceph -s 2026-01-05 02:29:58.964715 | orchestrator | cluster: 2026-01-05 02:29:58.964910 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-01-05 02:29:58.964930 | orchestrator | health: HEALTH_OK 2026-01-05 02:29:58.964943 | orchestrator | 2026-01-05 02:29:58.964954 | orchestrator | services: 2026-01-05 02:29:58.964966 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 70m) 2026-01-05 02:29:58.964995 | orchestrator | mgr: testbed-node-0(active, since 57m), standbys: testbed-node-1, testbed-node-2 2026-01-05 02:29:58.965010 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-01-05 02:29:58.965022 | orchestrator | osd: 6 osds: 6 up (since 66m), 6 in (since 67m) 2026-01-05 02:29:58.965036 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-01-05 02:29:58.965049 | orchestrator | 2026-01-05 02:29:58.965061 | orchestrator | data: 2026-01-05 02:29:58.965072 | orchestrator | volumes: 1/1 healthy 2026-01-05 02:29:58.965080 | orchestrator | pools: 14 pools, 401 pgs 2026-01-05 02:29:58.965088 | orchestrator | objects: 556 objects, 2.2 GiB 2026-01-05 02:29:58.965095 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-01-05 02:29:58.965103 | orchestrator | pgs: 401 active+clean 2026-01-05 02:29:58.965111 | orchestrator | 2026-01-05 02:29:59.011542 | orchestrator | 2026-01-05 02:29:59.011692 | orchestrator | # Ceph versions 2026-01-05 02:29:59.011710 | orchestrator | 2026-01-05 02:29:59.011719 | orchestrator | + echo 2026-01-05 02:29:59.011728 | orchestrator | + echo '# Ceph versions' 2026-01-05 02:29:59.011738 | 
orchestrator | + echo 2026-01-05 02:29:59.011747 | orchestrator | + ceph versions 2026-01-05 02:29:59.639411 | orchestrator | { 2026-01-05 02:29:59.639529 | orchestrator | "mon": { 2026-01-05 02:29:59.639544 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-05 02:29:59.639555 | orchestrator | }, 2026-01-05 02:29:59.639564 | orchestrator | "mgr": { 2026-01-05 02:29:59.639574 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-05 02:29:59.639582 | orchestrator | }, 2026-01-05 02:29:59.639591 | orchestrator | "osd": { 2026-01-05 02:29:59.639601 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-01-05 02:29:59.639607 | orchestrator | }, 2026-01-05 02:29:59.639612 | orchestrator | "mds": { 2026-01-05 02:29:59.639617 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-05 02:29:59.639623 | orchestrator | }, 2026-01-05 02:29:59.639628 | orchestrator | "rgw": { 2026-01-05 02:29:59.639633 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-05 02:29:59.639639 | orchestrator | }, 2026-01-05 02:29:59.639644 | orchestrator | "overall": { 2026-01-05 02:29:59.639650 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-01-05 02:29:59.639655 | orchestrator | } 2026-01-05 02:29:59.639660 | orchestrator | } 2026-01-05 02:29:59.683834 | orchestrator | 2026-01-05 02:29:59.683923 | orchestrator | # Ceph OSD tree 2026-01-05 02:29:59.683935 | orchestrator | 2026-01-05 02:29:59.683944 | orchestrator | + echo 2026-01-05 02:29:59.683953 | orchestrator | + echo '# Ceph OSD tree' 2026-01-05 02:29:59.683962 | orchestrator | + echo 2026-01-05 02:29:59.683970 | orchestrator | + ceph osd df tree 2026-01-05 02:30:00.255454 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE 
DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-01-05 02:30:00.255561 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 406 MiB 113 GiB 5.90 1.00 - root default 2026-01-05 02:30:00.255571 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-3 2026-01-05 02:30:00.255578 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1019 MiB 1 KiB 66 MiB 19 GiB 5.30 0.90 190 up osd.0 2026-01-05 02:30:00.255585 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.46 1.09 202 up osd.4 2026-01-05 02:30:00.255592 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2026-01-05 02:30:00.255599 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 7.06 1.20 209 up osd.1 2026-01-05 02:30:00.255635 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 972 MiB 907 MiB 1 KiB 66 MiB 19 GiB 4.75 0.81 181 up osd.5 2026-01-05 02:30:00.255643 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-01-05 02:30:00.255650 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 74 MiB 18 GiB 7.49 1.27 188 up osd.2 2026-01-05 02:30:00.255657 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 884 MiB 819 MiB 1 KiB 66 MiB 19 GiB 4.32 0.73 200 up osd.3 2026-01-05 02:30:00.255663 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 406 MiB 113 GiB 5.90 2026-01-05 02:30:00.255670 | orchestrator | MIN/MAX VAR: 0.73/1.27 STDDEV: 1.18 2026-01-05 02:30:00.305714 | orchestrator | 2026-01-05 02:30:00.305832 | orchestrator | # Ceph monitor status 2026-01-05 02:30:00.305841 | orchestrator | 2026-01-05 02:30:00.305846 | orchestrator | + echo 2026-01-05 02:30:00.305850 | orchestrator | + echo '# Ceph monitor status' 2026-01-05 02:30:00.305855 | orchestrator | + echo 2026-01-05 02:30:00.305859 | orchestrator | + ceph mon stat 2026-01-05 02:30:00.894630 | orchestrator | e1: 3 mons 
at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-01-05 02:30:00.937749 | orchestrator | 2026-01-05 02:30:00.937862 | orchestrator | # Ceph quorum status 2026-01-05 02:30:00.937874 | orchestrator | 2026-01-05 02:30:00.937882 | orchestrator | + echo 2026-01-05 02:30:00.937890 | orchestrator | + echo '# Ceph quorum status' 2026-01-05 02:30:00.937898 | orchestrator | + echo 2026-01-05 02:30:00.938154 | orchestrator | + ceph quorum_status 2026-01-05 02:30:00.938431 | orchestrator | + jq 2026-01-05 02:30:01.589006 | orchestrator | { 2026-01-05 02:30:01.589100 | orchestrator | "election_epoch": 4, 2026-01-05 02:30:01.589111 | orchestrator | "quorum": [ 2026-01-05 02:30:01.589119 | orchestrator | 0, 2026-01-05 02:30:01.589126 | orchestrator | 1, 2026-01-05 02:30:01.589133 | orchestrator | 2 2026-01-05 02:30:01.589140 | orchestrator | ], 2026-01-05 02:30:01.589148 | orchestrator | "quorum_names": [ 2026-01-05 02:30:01.589155 | orchestrator | "testbed-node-0", 2026-01-05 02:30:01.589162 | orchestrator | "testbed-node-1", 2026-01-05 02:30:01.589202 | orchestrator | "testbed-node-2" 2026-01-05 02:30:01.589209 | orchestrator | ], 2026-01-05 02:30:01.589217 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-01-05 02:30:01.589225 | orchestrator | "quorum_age": 4228, 2026-01-05 02:30:01.589232 | orchestrator | "features": { 2026-01-05 02:30:01.589239 | orchestrator | "quorum_con": "4540138322906710015", 2026-01-05 02:30:01.589246 | orchestrator | "quorum_mon": [ 2026-01-05 02:30:01.589253 | orchestrator | "kraken", 2026-01-05 02:30:01.589260 | orchestrator | "luminous", 2026-01-05 02:30:01.589267 | orchestrator | "mimic", 2026-01-05 02:30:01.589275 | orchestrator | 
"osdmap-prune", 2026-01-05 02:30:01.589286 | orchestrator | "nautilus", 2026-01-05 02:30:01.589297 | orchestrator | "octopus", 2026-01-05 02:30:01.589312 | orchestrator | "pacific", 2026-01-05 02:30:01.589328 | orchestrator | "elector-pinging", 2026-01-05 02:30:01.589338 | orchestrator | "quincy", 2026-01-05 02:30:01.589349 | orchestrator | "reef" 2026-01-05 02:30:01.589360 | orchestrator | ] 2026-01-05 02:30:01.589370 | orchestrator | }, 2026-01-05 02:30:01.589380 | orchestrator | "monmap": { 2026-01-05 02:30:01.589391 | orchestrator | "epoch": 1, 2026-01-05 02:30:01.589402 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-01-05 02:30:01.589415 | orchestrator | "modified": "2026-01-05T01:19:16.306783Z", 2026-01-05 02:30:01.589427 | orchestrator | "created": "2026-01-05T01:19:16.306783Z", 2026-01-05 02:30:01.589438 | orchestrator | "min_mon_release": 18, 2026-01-05 02:30:01.589450 | orchestrator | "min_mon_release_name": "reef", 2026-01-05 02:30:01.589461 | orchestrator | "election_strategy": 1, 2026-01-05 02:30:01.589468 | orchestrator | "disallowed_leaders: ": "", 2026-01-05 02:30:01.589475 | orchestrator | "stretch_mode": false, 2026-01-05 02:30:01.589482 | orchestrator | "tiebreaker_mon": "", 2026-01-05 02:30:01.589489 | orchestrator | "removed_ranks: ": "", 2026-01-05 02:30:01.589496 | orchestrator | "features": { 2026-01-05 02:30:01.589503 | orchestrator | "persistent": [ 2026-01-05 02:30:01.589510 | orchestrator | "kraken", 2026-01-05 02:30:01.589545 | orchestrator | "luminous", 2026-01-05 02:30:01.589553 | orchestrator | "mimic", 2026-01-05 02:30:01.589561 | orchestrator | "osdmap-prune", 2026-01-05 02:30:01.589569 | orchestrator | "nautilus", 2026-01-05 02:30:01.589577 | orchestrator | "octopus", 2026-01-05 02:30:01.589589 | orchestrator | "pacific", 2026-01-05 02:30:01.589599 | orchestrator | "elector-pinging", 2026-01-05 02:30:01.589610 | orchestrator | "quincy", 2026-01-05 02:30:01.589620 | orchestrator | "reef" 2026-01-05 
02:30:01.589632 | orchestrator | ], 2026-01-05 02:30:01.589645 | orchestrator | "optional": [] 2026-01-05 02:30:01.589656 | orchestrator | }, 2026-01-05 02:30:01.589670 | orchestrator | "mons": [ 2026-01-05 02:30:01.589694 | orchestrator | { 2026-01-05 02:30:01.589702 | orchestrator | "rank": 0, 2026-01-05 02:30:01.589710 | orchestrator | "name": "testbed-node-0", 2026-01-05 02:30:01.589718 | orchestrator | "public_addrs": { 2026-01-05 02:30:01.589726 | orchestrator | "addrvec": [ 2026-01-05 02:30:01.589735 | orchestrator | { 2026-01-05 02:30:01.589742 | orchestrator | "type": "v2", 2026-01-05 02:30:01.589751 | orchestrator | "addr": "192.168.16.8:3300", 2026-01-05 02:30:01.589761 | orchestrator | "nonce": 0 2026-01-05 02:30:01.589775 | orchestrator | }, 2026-01-05 02:30:01.589813 | orchestrator | { 2026-01-05 02:30:01.589824 | orchestrator | "type": "v1", 2026-01-05 02:30:01.589836 | orchestrator | "addr": "192.168.16.8:6789", 2026-01-05 02:30:01.589846 | orchestrator | "nonce": 0 2026-01-05 02:30:01.589856 | orchestrator | } 2026-01-05 02:30:01.589867 | orchestrator | ] 2026-01-05 02:30:01.589879 | orchestrator | }, 2026-01-05 02:30:01.589891 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-01-05 02:30:01.589902 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-01-05 02:30:01.589914 | orchestrator | "priority": 0, 2026-01-05 02:30:01.589926 | orchestrator | "weight": 0, 2026-01-05 02:30:01.589938 | orchestrator | "crush_location": "{}" 2026-01-05 02:30:01.589950 | orchestrator | }, 2026-01-05 02:30:01.589962 | orchestrator | { 2026-01-05 02:30:01.589973 | orchestrator | "rank": 1, 2026-01-05 02:30:01.589984 | orchestrator | "name": "testbed-node-1", 2026-01-05 02:30:01.589995 | orchestrator | "public_addrs": { 2026-01-05 02:30:01.590006 | orchestrator | "addrvec": [ 2026-01-05 02:30:01.590079 | orchestrator | { 2026-01-05 02:30:01.590093 | orchestrator | "type": "v2", 2026-01-05 02:30:01.590105 | orchestrator | "addr": "192.168.16.11:3300", 
2026-01-05 02:30:01.590117 | orchestrator | "nonce": 0 2026-01-05 02:30:01.590129 | orchestrator | }, 2026-01-05 02:30:01.590142 | orchestrator | { 2026-01-05 02:30:01.590153 | orchestrator | "type": "v1", 2026-01-05 02:30:01.590164 | orchestrator | "addr": "192.168.16.11:6789", 2026-01-05 02:30:01.590176 | orchestrator | "nonce": 0 2026-01-05 02:30:01.590187 | orchestrator | } 2026-01-05 02:30:01.590199 | orchestrator | ] 2026-01-05 02:30:01.590210 | orchestrator | }, 2026-01-05 02:30:01.590217 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-01-05 02:30:01.590224 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-01-05 02:30:01.590231 | orchestrator | "priority": 0, 2026-01-05 02:30:01.590238 | orchestrator | "weight": 0, 2026-01-05 02:30:01.590245 | orchestrator | "crush_location": "{}" 2026-01-05 02:30:01.590251 | orchestrator | }, 2026-01-05 02:30:01.590258 | orchestrator | { 2026-01-05 02:30:01.590265 | orchestrator | "rank": 2, 2026-01-05 02:30:01.590272 | orchestrator | "name": "testbed-node-2", 2026-01-05 02:30:01.590279 | orchestrator | "public_addrs": { 2026-01-05 02:30:01.590288 | orchestrator | "addrvec": [ 2026-01-05 02:30:01.590300 | orchestrator | { 2026-01-05 02:30:01.590312 | orchestrator | "type": "v2", 2026-01-05 02:30:01.590320 | orchestrator | "addr": "192.168.16.12:3300", 2026-01-05 02:30:01.590326 | orchestrator | "nonce": 0 2026-01-05 02:30:01.590333 | orchestrator | }, 2026-01-05 02:30:01.590340 | orchestrator | { 2026-01-05 02:30:01.590347 | orchestrator | "type": "v1", 2026-01-05 02:30:01.590353 | orchestrator | "addr": "192.168.16.12:6789", 2026-01-05 02:30:01.590360 | orchestrator | "nonce": 0 2026-01-05 02:30:01.590367 | orchestrator | } 2026-01-05 02:30:01.590374 | orchestrator | ] 2026-01-05 02:30:01.590381 | orchestrator | }, 2026-01-05 02:30:01.590388 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-01-05 02:30:01.590394 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-01-05 02:30:01.590401 | 
orchestrator | "priority": 0, 2026-01-05 02:30:01.590422 | orchestrator | "weight": 0, 2026-01-05 02:30:01.590433 | orchestrator | "crush_location": "{}" 2026-01-05 02:30:01.590446 | orchestrator | } 2026-01-05 02:30:01.590462 | orchestrator | ] 2026-01-05 02:30:01.590473 | orchestrator | } 2026-01-05 02:30:01.590484 | orchestrator | } 2026-01-05 02:30:01.590514 | orchestrator | 2026-01-05 02:30:01.590527 | orchestrator | # Ceph free space status 2026-01-05 02:30:01.590538 | orchestrator | 2026-01-05 02:30:01.590549 | orchestrator | + echo 2026-01-05 02:30:01.590561 | orchestrator | + echo '# Ceph free space status' 2026-01-05 02:30:01.590569 | orchestrator | + echo 2026-01-05 02:30:01.590576 | orchestrator | + ceph df 2026-01-05 02:30:02.234458 | orchestrator | --- RAW STORAGE --- 2026-01-05 02:30:02.234568 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-01-05 02:30:02.234589 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-01-05 02:30:02.234606 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-01-05 02:30:02.234613 | orchestrator | 2026-01-05 02:30:02.234620 | orchestrator | --- POOLS --- 2026-01-05 02:30:02.234627 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-01-05 02:30:02.234635 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-01-05 02:30:02.234641 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-01-05 02:30:02.234647 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-01-05 02:30:02.234653 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-01-05 02:30:02.234659 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-01-05 02:30:02.234665 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-01-05 02:30:02.234671 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-01-05 02:30:02.234677 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-01-05 02:30:02.234683 | orchestrator | 
.rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-01-05 02:30:02.234689 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-01-05 02:30:02.234695 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-01-05 02:30:02.234701 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB 2026-01-05 02:30:02.234706 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-01-05 02:30:02.234712 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-01-05 02:30:02.286390 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-05 02:30:02.350411 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-05 02:30:02.350503 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-01-05 02:30:02.350514 | orchestrator | + osism apply facts 2026-01-05 02:30:04.458542 | orchestrator | 2026-01-05 02:30:04 | INFO  | Task 3e39e8a6-9715-49da-8856-5e0129ccac81 (facts) was prepared for execution. 2026-01-05 02:30:04.458678 | orchestrator | 2026-01-05 02:30:04 | INFO  | It takes a moment until task 3e39e8a6-9715-49da-8856-5e0129ccac81 (facts) has been started and output is visible here. 
2026-01-05 02:30:18.386473 | orchestrator | 2026-01-05 02:30:18.386597 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-05 02:30:18.386615 | orchestrator | 2026-01-05 02:30:18.386628 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-05 02:30:18.386640 | orchestrator | Monday 05 January 2026 02:30:09 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-01-05 02:30:18.386651 | orchestrator | ok: [testbed-manager] 2026-01-05 02:30:18.386664 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:18.386675 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:30:18.386687 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:30:18.386698 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:30:18.386709 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:30:18.386720 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:30:18.386731 | orchestrator | 2026-01-05 02:30:18.386742 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-05 02:30:18.386781 | orchestrator | Monday 05 January 2026 02:30:10 +0000 (0:00:01.285) 0:00:01.568 ******** 2026-01-05 02:30:18.386792 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:30:18.386804 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:18.386815 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:30:18.386827 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:30:18.386910 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:30:18.386930 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:30:18.386947 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:30:18.386964 | orchestrator | 2026-01-05 02:30:18.386980 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-05 02:30:18.386996 | orchestrator | 2026-01-05 02:30:18.387012 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-05 02:30:18.387034 | orchestrator | Monday 05 January 2026 02:30:11 +0000 (0:00:01.394) 0:00:02.962 ******** 2026-01-05 02:30:18.387063 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:18.387090 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:30:18.387117 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:30:18.387149 | orchestrator | ok: [testbed-manager] 2026-01-05 02:30:18.387184 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:30:18.387220 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:30:18.387257 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:30:18.387294 | orchestrator | 2026-01-05 02:30:18.387331 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-05 02:30:18.387369 | orchestrator | 2026-01-05 02:30:18.387406 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-05 02:30:18.387444 | orchestrator | Monday 05 January 2026 02:30:17 +0000 (0:00:05.588) 0:00:08.551 ******** 2026-01-05 02:30:18.387479 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:30:18.387504 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:18.387524 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:30:18.387557 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:30:18.387585 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:30:18.387613 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:30:18.387639 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:30:18.387665 | orchestrator | 2026-01-05 02:30:18.387692 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:30:18.387720 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:30:18.387748 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-05 02:30:18.387775 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:30:18.387864 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:30:18.387897 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:30:18.387927 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:30:18.387963 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:30:18.387999 | orchestrator | 2026-01-05 02:30:18.388033 | orchestrator | 2026-01-05 02:30:18.388051 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:30:18.388068 | orchestrator | Monday 05 January 2026 02:30:17 +0000 (0:00:00.597) 0:00:09.148 ******** 2026-01-05 02:30:18.388085 | orchestrator | =============================================================================== 2026-01-05 02:30:18.388102 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.59s 2026-01-05 02:30:18.388146 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s 2026-01-05 02:30:18.388164 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-01-05 02:30:18.388182 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2026-01-05 02:30:18.713642 | orchestrator | + osism validate ceph-mons 2026-01-05 02:30:52.094111 | orchestrator | 2026-01-05 02:30:52.094233 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-01-05 02:30:52.094246 | orchestrator | 2026-01-05 02:30:52.094255 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-01-05 02:30:52.094263 | orchestrator | Monday 05 January 2026 02:30:35 +0000 (0:00:00.494) 0:00:00.494 ******** 2026-01-05 02:30:52.094271 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:30:52.094278 | orchestrator | 2026-01-05 02:30:52.094285 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-05 02:30:52.094293 | orchestrator | Monday 05 January 2026 02:30:36 +0000 (0:00:00.929) 0:00:01.424 ******** 2026-01-05 02:30:52.094300 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:30:52.094307 | orchestrator | 2026-01-05 02:30:52.094313 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-05 02:30:52.094320 | orchestrator | Monday 05 January 2026 02:30:37 +0000 (0:00:00.996) 0:00:02.420 ******** 2026-01-05 02:30:52.094327 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.094335 | orchestrator | 2026-01-05 02:30:52.094341 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-05 02:30:52.094347 | orchestrator | Monday 05 January 2026 02:30:37 +0000 (0:00:00.141) 0:00:02.562 ******** 2026-01-05 02:30:52.094353 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.094359 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:30:52.094365 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:30:52.094372 | orchestrator | 2026-01-05 02:30:52.094377 | orchestrator | TASK [Get container info] ****************************************************** 2026-01-05 02:30:52.094383 | orchestrator | Monday 05 January 2026 02:30:37 +0000 (0:00:00.297) 0:00:02.859 ******** 2026-01-05 02:30:52.094388 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:30:52.094394 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.094400 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:30:52.094406 | 
orchestrator | 2026-01-05 02:30:52.094412 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-05 02:30:52.094418 | orchestrator | Monday 05 January 2026 02:30:39 +0000 (0:00:01.122) 0:00:03.982 ******** 2026-01-05 02:30:52.094424 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.094431 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:30:52.094438 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:30:52.094445 | orchestrator | 2026-01-05 02:30:52.094451 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-05 02:30:52.094458 | orchestrator | Monday 05 January 2026 02:30:39 +0000 (0:00:00.317) 0:00:04.299 ******** 2026-01-05 02:30:52.094465 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.094472 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:30:52.094479 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:30:52.094486 | orchestrator | 2026-01-05 02:30:52.094506 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-05 02:30:52.094512 | orchestrator | Monday 05 January 2026 02:30:39 +0000 (0:00:00.513) 0:00:04.812 ******** 2026-01-05 02:30:52.094528 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.094534 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:30:52.094540 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:30:52.094554 | orchestrator | 2026-01-05 02:30:52.094561 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-01-05 02:30:52.094568 | orchestrator | Monday 05 January 2026 02:30:40 +0000 (0:00:00.305) 0:00:05.118 ******** 2026-01-05 02:30:52.094575 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.094610 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:30:52.094618 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:30:52.094625 | orchestrator | 2026-01-05 
02:30:52.094632 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-01-05 02:30:52.094639 | orchestrator | Monday 05 January 2026 02:30:40 +0000 (0:00:00.333) 0:00:05.452 ******** 2026-01-05 02:30:52.094646 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.094653 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:30:52.094661 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:30:52.094668 | orchestrator | 2026-01-05 02:30:52.094676 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-05 02:30:52.094684 | orchestrator | Monday 05 January 2026 02:30:41 +0000 (0:00:00.503) 0:00:05.955 ******** 2026-01-05 02:30:52.094691 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.094699 | orchestrator | 2026-01-05 02:30:52.094706 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-05 02:30:52.094715 | orchestrator | Monday 05 January 2026 02:30:41 +0000 (0:00:00.297) 0:00:06.253 ******** 2026-01-05 02:30:52.094722 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.094729 | orchestrator | 2026-01-05 02:30:52.094736 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-05 02:30:52.094743 | orchestrator | Monday 05 January 2026 02:30:41 +0000 (0:00:00.273) 0:00:06.527 ******** 2026-01-05 02:30:52.094750 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.094758 | orchestrator | 2026-01-05 02:30:52.094765 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:30:52.094772 | orchestrator | Monday 05 January 2026 02:30:41 +0000 (0:00:00.291) 0:00:06.818 ******** 2026-01-05 02:30:52.094779 | orchestrator | 2026-01-05 02:30:52.094786 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:30:52.094794 | orchestrator | 
Monday 05 January 2026 02:30:42 +0000 (0:00:00.088) 0:00:06.907 ******** 2026-01-05 02:30:52.094801 | orchestrator | 2026-01-05 02:30:52.094808 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:30:52.094815 | orchestrator | Monday 05 January 2026 02:30:42 +0000 (0:00:00.083) 0:00:06.990 ******** 2026-01-05 02:30:52.094823 | orchestrator | 2026-01-05 02:30:52.094830 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-05 02:30:52.094838 | orchestrator | Monday 05 January 2026 02:30:42 +0000 (0:00:00.084) 0:00:07.074 ******** 2026-01-05 02:30:52.094845 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.094852 | orchestrator | 2026-01-05 02:30:52.094859 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-05 02:30:52.094882 | orchestrator | Monday 05 January 2026 02:30:42 +0000 (0:00:00.272) 0:00:07.347 ******** 2026-01-05 02:30:52.094890 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.094898 | orchestrator | 2026-01-05 02:30:52.094923 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-01-05 02:30:52.094962 | orchestrator | Monday 05 January 2026 02:30:42 +0000 (0:00:00.238) 0:00:07.586 ******** 2026-01-05 02:30:52.094969 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.094975 | orchestrator | 2026-01-05 02:30:52.094981 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-01-05 02:30:52.094988 | orchestrator | Monday 05 January 2026 02:30:42 +0000 (0:00:00.116) 0:00:07.703 ******** 2026-01-05 02:30:52.094995 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:30:52.095005 | orchestrator | 2026-01-05 02:30:52.095012 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-01-05 02:30:52.095019 | orchestrator | Monday 
05 January 2026 02:30:44 +0000 (0:00:01.655) 0:00:09.358 ******** 2026-01-05 02:30:52.095025 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.095031 | orchestrator | 2026-01-05 02:30:52.095037 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-01-05 02:30:52.095043 | orchestrator | Monday 05 January 2026 02:30:45 +0000 (0:00:00.559) 0:00:09.918 ******** 2026-01-05 02:30:52.095058 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.095064 | orchestrator | 2026-01-05 02:30:52.095070 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-01-05 02:30:52.095077 | orchestrator | Monday 05 January 2026 02:30:45 +0000 (0:00:00.135) 0:00:10.054 ******** 2026-01-05 02:30:52.095083 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.095089 | orchestrator | 2026-01-05 02:30:52.095095 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-01-05 02:30:52.095102 | orchestrator | Monday 05 January 2026 02:30:45 +0000 (0:00:00.377) 0:00:10.431 ******** 2026-01-05 02:30:52.095108 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.095114 | orchestrator | 2026-01-05 02:30:52.095120 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-01-05 02:30:52.095126 | orchestrator | Monday 05 January 2026 02:30:45 +0000 (0:00:00.332) 0:00:10.764 ******** 2026-01-05 02:30:52.095131 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.095137 | orchestrator | 2026-01-05 02:30:52.095143 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-01-05 02:30:52.095149 | orchestrator | Monday 05 January 2026 02:30:46 +0000 (0:00:00.131) 0:00:10.896 ******** 2026-01-05 02:30:52.095154 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.095160 | orchestrator | 2026-01-05 02:30:52.095166 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-01-05 02:30:52.095172 | orchestrator | Monday 05 January 2026 02:30:46 +0000 (0:00:00.153) 0:00:11.050 ******** 2026-01-05 02:30:52.095178 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.095184 | orchestrator | 2026-01-05 02:30:52.095190 | orchestrator | TASK [Gather status data] ****************************************************** 2026-01-05 02:30:52.095196 | orchestrator | Monday 05 January 2026 02:30:46 +0000 (0:00:00.124) 0:00:11.174 ******** 2026-01-05 02:30:52.095203 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:30:52.095209 | orchestrator | 2026-01-05 02:30:52.095215 | orchestrator | TASK [Set health test data] **************************************************** 2026-01-05 02:30:52.095222 | orchestrator | Monday 05 January 2026 02:30:47 +0000 (0:00:01.415) 0:00:12.590 ******** 2026-01-05 02:30:52.095228 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.095234 | orchestrator | 2026-01-05 02:30:52.095241 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-01-05 02:30:52.095248 | orchestrator | Monday 05 January 2026 02:30:48 +0000 (0:00:00.338) 0:00:12.928 ******** 2026-01-05 02:30:52.095254 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.095260 | orchestrator | 2026-01-05 02:30:52.095267 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-01-05 02:30:52.095274 | orchestrator | Monday 05 January 2026 02:30:48 +0000 (0:00:00.153) 0:00:13.081 ******** 2026-01-05 02:30:52.095280 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:30:52.095287 | orchestrator | 2026-01-05 02:30:52.095293 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-01-05 02:30:52.095300 | orchestrator | Monday 05 January 2026 02:30:48 +0000 (0:00:00.154) 0:00:13.236 ******** 2026-01-05 02:30:52.095306 | orchestrator | 
skipping: [testbed-node-0] 2026-01-05 02:30:52.095313 | orchestrator | 2026-01-05 02:30:52.095320 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-01-05 02:30:52.095326 | orchestrator | Monday 05 January 2026 02:30:48 +0000 (0:00:00.147) 0:00:13.384 ******** 2026-01-05 02:30:52.095343 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.095350 | orchestrator | 2026-01-05 02:30:52.095356 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-05 02:30:52.095363 | orchestrator | Monday 05 January 2026 02:30:48 +0000 (0:00:00.326) 0:00:13.710 ******** 2026-01-05 02:30:52.095370 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:30:52.095377 | orchestrator | 2026-01-05 02:30:52.095384 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-05 02:30:52.095391 | orchestrator | Monday 05 January 2026 02:30:49 +0000 (0:00:00.312) 0:00:14.023 ******** 2026-01-05 02:30:52.095406 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:30:52.095413 | orchestrator | 2026-01-05 02:30:52.095419 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-05 02:30:52.095426 | orchestrator | Monday 05 January 2026 02:30:49 +0000 (0:00:00.290) 0:00:14.313 ******** 2026-01-05 02:30:52.095432 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:30:52.095438 | orchestrator | 2026-01-05 02:30:52.095444 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-05 02:30:52.095450 | orchestrator | Monday 05 January 2026 02:30:51 +0000 (0:00:01.838) 0:00:16.152 ******** 2026-01-05 02:30:52.095456 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:30:52.095462 | orchestrator | 2026-01-05 02:30:52.095468 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-01-05 02:30:52.095474 | orchestrator | Monday 05 January 2026 02:30:51 +0000 (0:00:00.278) 0:00:16.430 ******** 2026-01-05 02:30:52.095480 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:30:52.095487 | orchestrator | 2026-01-05 02:30:52.095506 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:30:54.893656 | orchestrator | Monday 05 January 2026 02:30:51 +0000 (0:00:00.262) 0:00:16.693 ******** 2026-01-05 02:30:54.893829 | orchestrator | 2026-01-05 02:30:54.893851 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:30:54.893864 | orchestrator | Monday 05 January 2026 02:30:51 +0000 (0:00:00.073) 0:00:16.767 ******** 2026-01-05 02:30:54.893875 | orchestrator | 2026-01-05 02:30:54.893889 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:30:54.893900 | orchestrator | Monday 05 January 2026 02:30:51 +0000 (0:00:00.071) 0:00:16.838 ******** 2026-01-05 02:30:54.894065 | orchestrator | 2026-01-05 02:30:54.894084 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-05 02:30:54.894096 | orchestrator | Monday 05 January 2026 02:30:52 +0000 (0:00:00.106) 0:00:16.944 ******** 2026-01-05 02:30:54.894109 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:30:54.894121 | orchestrator | 2026-01-05 02:30:54.894134 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-05 02:30:54.894146 | orchestrator | Monday 05 January 2026 02:30:53 +0000 (0:00:01.558) 0:00:18.503 ******** 2026-01-05 02:30:54.894158 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-05 02:30:54.894173 | orchestrator |  "msg": [ 2026-01-05 
02:30:54.894188 | orchestrator |  "Validator run completed.", 2026-01-05 02:30:54.894201 | orchestrator |  "You can find the report file here:", 2026-01-05 02:30:54.894215 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-01-05T02:30:36+00:00-report.json", 2026-01-05 02:30:54.894229 | orchestrator |  "on the following host:", 2026-01-05 02:30:54.894242 | orchestrator |  "testbed-manager" 2026-01-05 02:30:54.894255 | orchestrator |  ] 2026-01-05 02:30:54.894269 | orchestrator | } 2026-01-05 02:30:54.894282 | orchestrator | 2026-01-05 02:30:54.894296 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:30:54.894310 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-05 02:30:54.894326 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:30:54.894339 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:30:54.894352 | orchestrator | 2026-01-05 02:30:54.894365 | orchestrator | 2026-01-05 02:30:54.894379 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:30:54.894392 | orchestrator | Monday 05 January 2026 02:30:54 +0000 (0:00:00.886) 0:00:19.390 ******** 2026-01-05 02:30:54.894431 | orchestrator | =============================================================================== 2026-01-05 02:30:54.894444 | orchestrator | Aggregate test results step one ----------------------------------------- 1.84s 2026-01-05 02:30:54.894457 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.66s 2026-01-05 02:30:54.894470 | orchestrator | Write report file ------------------------------------------------------- 1.56s 2026-01-05 02:30:54.894483 | orchestrator | Gather status data 
------------------------------------------------------ 1.42s 2026-01-05 02:30:54.894496 | orchestrator | Get container info ------------------------------------------------------ 1.12s 2026-01-05 02:30:54.894509 | orchestrator | Create report output directory ------------------------------------------ 1.00s 2026-01-05 02:30:54.894522 | orchestrator | Get timestamp for report file ------------------------------------------- 0.93s 2026-01-05 02:30:54.894536 | orchestrator | Print report file information ------------------------------------------- 0.89s 2026-01-05 02:30:54.894549 | orchestrator | Set quorum test data ---------------------------------------------------- 0.56s 2026-01-05 02:30:54.894560 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s 2026-01-05 02:30:54.894605 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.50s 2026-01-05 02:30:54.894617 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.38s 2026-01-05 02:30:54.894628 | orchestrator | Set health test data ---------------------------------------------------- 0.34s 2026-01-05 02:30:54.894639 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.33s 2026-01-05 02:30:54.894650 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s 2026-01-05 02:30:54.894662 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s 2026-01-05 02:30:54.894673 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s 2026-01-05 02:30:54.894684 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.31s 2026-01-05 02:30:54.894695 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-01-05 02:30:54.894706 | orchestrator | Aggregate test results step one 
----------------------------------------- 0.30s 2026-01-05 02:30:55.244334 | orchestrator | + osism validate ceph-mgrs 2026-01-05 02:31:27.497200 | orchestrator | 2026-01-05 02:31:27.497293 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-01-05 02:31:27.497300 | orchestrator | 2026-01-05 02:31:27.497306 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-01-05 02:31:27.497311 | orchestrator | Monday 05 January 2026 02:31:12 +0000 (0:00:00.456) 0:00:00.456 ******** 2026-01-05 02:31:27.497316 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:31:27.497321 | orchestrator | 2026-01-05 02:31:27.497325 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-05 02:31:27.497329 | orchestrator | Monday 05 January 2026 02:31:12 +0000 (0:00:00.845) 0:00:01.301 ******** 2026-01-05 02:31:27.497334 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:31:27.497338 | orchestrator | 2026-01-05 02:31:27.497342 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-05 02:31:27.497346 | orchestrator | Monday 05 January 2026 02:31:14 +0000 (0:00:01.019) 0:00:02.321 ******** 2026-01-05 02:31:27.497350 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497355 | orchestrator | 2026-01-05 02:31:27.497359 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-05 02:31:27.497363 | orchestrator | Monday 05 January 2026 02:31:14 +0000 (0:00:00.129) 0:00:02.451 ******** 2026-01-05 02:31:27.497367 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497370 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:31:27.497374 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:31:27.497378 | orchestrator | 2026-01-05 02:31:27.497382 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-01-05 02:31:27.497386 | orchestrator | Monday 05 January 2026 02:31:14 +0000 (0:00:00.310) 0:00:02.762 ******** 2026-01-05 02:31:27.497406 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:31:27.497410 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:31:27.497413 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497417 | orchestrator | 2026-01-05 02:31:27.497421 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-05 02:31:27.497425 | orchestrator | Monday 05 January 2026 02:31:15 +0000 (0:00:01.147) 0:00:03.909 ******** 2026-01-05 02:31:27.497429 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:31:27.497433 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:31:27.497437 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:31:27.497441 | orchestrator | 2026-01-05 02:31:27.497445 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-05 02:31:27.497449 | orchestrator | Monday 05 January 2026 02:31:15 +0000 (0:00:00.294) 0:00:04.203 ******** 2026-01-05 02:31:27.497453 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497457 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:31:27.497461 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:31:27.497465 | orchestrator | 2026-01-05 02:31:27.497468 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-05 02:31:27.497472 | orchestrator | Monday 05 January 2026 02:31:16 +0000 (0:00:00.579) 0:00:04.783 ******** 2026-01-05 02:31:27.497476 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497480 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:31:27.497484 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:31:27.497488 | orchestrator | 2026-01-05 02:31:27.497491 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-01-05 02:31:27.497495 | orchestrator | Monday 05 January 2026 02:31:16 +0000 (0:00:00.359) 0:00:05.142 ******** 2026-01-05 02:31:27.497499 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:31:27.497503 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:31:27.497507 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:31:27.497511 | orchestrator | 2026-01-05 02:31:27.497515 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-01-05 02:31:27.497518 | orchestrator | Monday 05 January 2026 02:31:17 +0000 (0:00:00.340) 0:00:05.483 ******** 2026-01-05 02:31:27.497522 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497526 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:31:27.497530 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:31:27.497534 | orchestrator | 2026-01-05 02:31:27.497538 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-05 02:31:27.497542 | orchestrator | Monday 05 January 2026 02:31:17 +0000 (0:00:00.513) 0:00:05.996 ******** 2026-01-05 02:31:27.497546 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:31:27.497549 | orchestrator | 2026-01-05 02:31:27.497554 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-05 02:31:27.497560 | orchestrator | Monday 05 January 2026 02:31:17 +0000 (0:00:00.258) 0:00:06.255 ******** 2026-01-05 02:31:27.497566 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:31:27.497575 | orchestrator | 2026-01-05 02:31:27.497583 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-05 02:31:27.497589 | orchestrator | Monday 05 January 2026 02:31:18 +0000 (0:00:00.284) 0:00:06.539 ******** 2026-01-05 02:31:27.497594 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:31:27.497601 | orchestrator | 2026-01-05 02:31:27.497607 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-01-05 02:31:27.497613 | orchestrator | Monday 05 January 2026 02:31:18 +0000 (0:00:00.292) 0:00:06.831 ******** 2026-01-05 02:31:27.497619 | orchestrator | 2026-01-05 02:31:27.497625 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:31:27.497631 | orchestrator | Monday 05 January 2026 02:31:18 +0000 (0:00:00.072) 0:00:06.904 ******** 2026-01-05 02:31:27.497637 | orchestrator | 2026-01-05 02:31:27.497643 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:31:27.497649 | orchestrator | Monday 05 January 2026 02:31:18 +0000 (0:00:00.070) 0:00:06.974 ******** 2026-01-05 02:31:27.497662 | orchestrator | 2026-01-05 02:31:27.497669 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-05 02:31:27.497675 | orchestrator | Monday 05 January 2026 02:31:18 +0000 (0:00:00.077) 0:00:07.052 ******** 2026-01-05 02:31:27.497692 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:31:27.497702 | orchestrator | 2026-01-05 02:31:27.497706 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-05 02:31:27.497710 | orchestrator | Monday 05 January 2026 02:31:19 +0000 (0:00:00.288) 0:00:07.341 ******** 2026-01-05 02:31:27.497714 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:31:27.497718 | orchestrator | 2026-01-05 02:31:27.497736 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-01-05 02:31:27.497741 | orchestrator | Monday 05 January 2026 02:31:19 +0000 (0:00:00.262) 0:00:07.604 ******** 2026-01-05 02:31:27.497746 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497750 | orchestrator | 2026-01-05 02:31:27.497754 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-01-05 02:31:27.497759 | orchestrator | Monday 05 January 2026 02:31:19 +0000 (0:00:00.128) 0:00:07.733 ******** 2026-01-05 02:31:27.497763 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:31:27.497768 | orchestrator | 2026-01-05 02:31:27.497772 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-01-05 02:31:27.497777 | orchestrator | Monday 05 January 2026 02:31:21 +0000 (0:00:02.148) 0:00:09.881 ******** 2026-01-05 02:31:27.497781 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497786 | orchestrator | 2026-01-05 02:31:27.497804 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-01-05 02:31:27.497808 | orchestrator | Monday 05 January 2026 02:31:22 +0000 (0:00:00.474) 0:00:10.355 ******** 2026-01-05 02:31:27.497813 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497817 | orchestrator | 2026-01-05 02:31:27.497822 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-01-05 02:31:27.497826 | orchestrator | Monday 05 January 2026 02:31:22 +0000 (0:00:00.390) 0:00:10.746 ******** 2026-01-05 02:31:27.497831 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:31:27.497835 | orchestrator | 2026-01-05 02:31:27.497840 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-01-05 02:31:27.497845 | orchestrator | Monday 05 January 2026 02:31:22 +0000 (0:00:00.146) 0:00:10.892 ******** 2026-01-05 02:31:27.497849 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:31:27.497854 | orchestrator | 2026-01-05 02:31:27.497858 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-05 02:31:27.497863 | orchestrator | Monday 05 January 2026 02:31:22 +0000 (0:00:00.140) 0:00:11.032 ******** 2026-01-05 02:31:27.497867 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 
02:31:27.497872 | orchestrator | 2026-01-05 02:31:27.497877 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-05 02:31:27.497881 | orchestrator | Monday 05 January 2026 02:31:23 +0000 (0:00:00.304) 0:00:11.336 ******** 2026-01-05 02:31:27.497886 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:31:27.497890 | orchestrator | 2026-01-05 02:31:27.497894 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-05 02:31:27.497899 | orchestrator | Monday 05 January 2026 02:31:23 +0000 (0:00:00.292) 0:00:11.629 ******** 2026-01-05 02:31:27.497903 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:31:27.497908 | orchestrator | 2026-01-05 02:31:27.497912 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-05 02:31:27.497917 | orchestrator | Monday 05 January 2026 02:31:24 +0000 (0:00:01.323) 0:00:12.952 ******** 2026-01-05 02:31:27.497921 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:31:27.497926 | orchestrator | 2026-01-05 02:31:27.497930 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-05 02:31:27.497935 | orchestrator | Monday 05 January 2026 02:31:24 +0000 (0:00:00.261) 0:00:13.214 ******** 2026-01-05 02:31:27.497944 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:31:27.497949 | orchestrator | 2026-01-05 02:31:27.497953 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:31:27.497958 | orchestrator | Monday 05 January 2026 02:31:25 +0000 (0:00:00.275) 0:00:13.490 ******** 2026-01-05 02:31:27.497963 | orchestrator | 2026-01-05 02:31:27.497967 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:31:27.497972 | orchestrator 
| Monday 05 January 2026 02:31:25 +0000 (0:00:00.079) 0:00:13.570 ******** 2026-01-05 02:31:27.497976 | orchestrator | 2026-01-05 02:31:27.497980 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-05 02:31:27.497985 | orchestrator | Monday 05 January 2026 02:31:25 +0000 (0:00:00.075) 0:00:13.645 ******** 2026-01-05 02:31:27.497990 | orchestrator | 2026-01-05 02:31:27.497994 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-05 02:31:27.497999 | orchestrator | Monday 05 January 2026 02:31:25 +0000 (0:00:00.305) 0:00:13.951 ******** 2026-01-05 02:31:27.498003 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-05 02:31:27.498008 | orchestrator | 2026-01-05 02:31:27.498067 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-05 02:31:27.498073 | orchestrator | Monday 05 January 2026 02:31:27 +0000 (0:00:01.401) 0:00:15.352 ******** 2026-01-05 02:31:27.498078 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-05 02:31:27.498083 | orchestrator |  "msg": [ 2026-01-05 02:31:27.498088 | orchestrator |  "Validator run completed.", 2026-01-05 02:31:27.498097 | orchestrator |  "You can find the report file here:", 2026-01-05 02:31:27.498102 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-01-05T02:31:12+00:00-report.json", 2026-01-05 02:31:27.498108 | orchestrator |  "on the following host:", 2026-01-05 02:31:27.498112 | orchestrator |  "testbed-manager" 2026-01-05 02:31:27.498117 | orchestrator |  ] 2026-01-05 02:31:27.498121 | orchestrator | } 2026-01-05 02:31:27.498125 | orchestrator | 2026-01-05 02:31:27.498129 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:31:27.498134 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-01-05 02:31:27.498140 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:31:27.498149 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:31:27.831735 | orchestrator | 2026-01-05 02:31:27.831825 | orchestrator | 2026-01-05 02:31:27.831833 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:31:27.831840 | orchestrator | Monday 05 January 2026 02:31:27 +0000 (0:00:00.446) 0:00:15.799 ******** 2026-01-05 02:31:27.831850 | orchestrator | =============================================================================== 2026-01-05 02:31:27.831855 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.15s 2026-01-05 02:31:27.831859 | orchestrator | Write report file ------------------------------------------------------- 1.40s 2026-01-05 02:31:27.831863 | orchestrator | Aggregate test results step one ----------------------------------------- 1.32s 2026-01-05 02:31:27.831874 | orchestrator | Get container info ------------------------------------------------------ 1.15s 2026-01-05 02:31:27.831879 | orchestrator | Create report output directory ------------------------------------------ 1.02s 2026-01-05 02:31:27.831883 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-01-05 02:31:27.831886 | orchestrator | Set test result to passed if container is existing ---------------------- 0.58s 2026-01-05 02:31:27.831891 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.51s 2026-01-05 02:31:27.831912 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.47s 2026-01-05 02:31:27.831916 | orchestrator | Flush handlers ---------------------------------------------------------- 0.46s 2026-01-05 02:31:27.831920 | 
orchestrator | Print report file information ------------------------------------------- 0.45s 2026-01-05 02:31:27.831924 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.39s 2026-01-05 02:31:27.831928 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2026-01-05 02:31:27.831932 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.34s 2026-01-05 02:31:27.831936 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-01-05 02:31:27.831939 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s 2026-01-05 02:31:27.831943 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-01-05 02:31:27.831947 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.29s 2026-01-05 02:31:27.831951 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s 2026-01-05 02:31:27.831955 | orchestrator | Print report file information ------------------------------------------- 0.29s 2026-01-05 02:31:28.171968 | orchestrator | + osism validate ceph-osds 2026-01-05 02:31:49.748219 | orchestrator | 2026-01-05 02:31:49.748351 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-01-05 02:31:49.748368 | orchestrator | 2026-01-05 02:31:49.748377 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-01-05 02:31:49.748387 | orchestrator | Monday 05 January 2026 02:31:44 +0000 (0:00:00.429) 0:00:00.429 ******** 2026-01-05 02:31:49.748397 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 02:31:49.748405 | orchestrator | 2026-01-05 02:31:49.748414 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-01-05 02:31:49.748423 | orchestrator | Monday 05 January 2026 02:31:45 +0000 (0:00:00.859) 0:00:01.288 ******** 2026-01-05 02:31:49.748431 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 02:31:49.748441 | orchestrator | 2026-01-05 02:31:49.748449 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-05 02:31:49.748459 | orchestrator | Monday 05 January 2026 02:31:46 +0000 (0:00:00.586) 0:00:01.874 ******** 2026-01-05 02:31:49.748468 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-05 02:31:49.748477 | orchestrator | 2026-01-05 02:31:49.748485 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-05 02:31:49.748493 | orchestrator | Monday 05 January 2026 02:31:47 +0000 (0:00:00.810) 0:00:02.685 ******** 2026-01-05 02:31:49.748502 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:31:49.748509 | orchestrator | 2026-01-05 02:31:49.748515 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-01-05 02:31:49.748524 | orchestrator | Monday 05 January 2026 02:31:47 +0000 (0:00:00.133) 0:00:02.818 ******** 2026-01-05 02:31:49.748539 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:31:49.748547 | orchestrator | 2026-01-05 02:31:49.748556 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-01-05 02:31:49.748564 | orchestrator | Monday 05 January 2026 02:31:47 +0000 (0:00:00.144) 0:00:02.963 ******** 2026-01-05 02:31:49.748572 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:31:49.748581 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:31:49.748590 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:31:49.748599 | orchestrator | 2026-01-05 02:31:49.748623 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-01-05 02:31:49.748631 | orchestrator | Monday 05 January 2026 02:31:47 +0000 (0:00:00.310) 0:00:03.274 ******** 2026-01-05 02:31:49.748640 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:31:49.748649 | orchestrator | 2026-01-05 02:31:49.748658 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-01-05 02:31:49.748690 | orchestrator | Monday 05 January 2026 02:31:47 +0000 (0:00:00.167) 0:00:03.441 ******** 2026-01-05 02:31:49.748697 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:31:49.748702 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:31:49.748709 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:31:49.748715 | orchestrator | 2026-01-05 02:31:49.748721 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-01-05 02:31:49.748727 | orchestrator | Monday 05 January 2026 02:31:48 +0000 (0:00:00.333) 0:00:03.774 ******** 2026-01-05 02:31:49.748733 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:31:49.748739 | orchestrator | 2026-01-05 02:31:49.748745 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-05 02:31:49.748752 | orchestrator | Monday 05 January 2026 02:31:49 +0000 (0:00:00.788) 0:00:04.562 ******** 2026-01-05 02:31:49.748758 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:31:49.748764 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:31:49.748770 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:31:49.748776 | orchestrator | 2026-01-05 02:31:49.748785 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-01-05 02:31:49.748793 | orchestrator | Monday 05 January 2026 02:31:49 +0000 (0:00:00.342) 0:00:04.905 ******** 2026-01-05 02:31:49.748809 | orchestrator | skipping: [testbed-node-3] => (item={'id': '59dc7a7820db3b31ead8b88817b06973382688e7dbd3d624614b4fc0d959a0c8', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-01-05 02:31:49.748824 | orchestrator | skipping: [testbed-node-3] => (item={'id': '03094345c9ddd29ccec1234170728da0cd4b1435bc2c95bc41b4f99659f2f79a', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-01-05 02:31:49.748834 | orchestrator | skipping: [testbed-node-3] => (item={'id': '972d45474f64323f5129f15cc22991e5c221ef6d5a8eec87803ea45661857eab', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-01-05 02:31:49.748843 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0d04b2b732a46618eedee6c96bfc3fdbfd913238b75fc945a85c8715eb705189', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-01-05 02:31:49.748852 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0ccff20498be36fe76ae2f503714e010b43c5cd181e486d0f6484ee6abbc299c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-01-05 02:31:49.748885 | orchestrator | skipping: [testbed-node-3] => (item={'id': '365b6f1eefbce6b941bc6153d4aa16f2b26d6719bf018f8bc441efba8056d49a', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-01-05 02:31:49.748895 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd525eacfa3da67dbdcd2d775459d44f0c978874889855edb6705c3a8921a19c6', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-01-05 02:31:49.748904 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b5b41fa9be4df9fe794b589643e511951563e57cb46db1ee035b2ab9b1a422e0', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-01-05 02:31:49.748913 | orchestrator | skipping: [testbed-node-3] => (item={'id': '428a5367d2b4e4c05f7e8ec02e9775e500d4493a0ad6071f597bd0dce4ef0866', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:49.748932 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'df26f66a55e3dc89750de435090591373f2e7ed81e40faec4cc95c15a70f4099', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:49.748942 | orchestrator | skipping: [testbed-node-3] => (item={'id': '274f591b0f2386595978a5bbe6bbc761cbcce544ddf1be893041f99015c419c5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:49.748953 | orchestrator | ok: [testbed-node-3] => (item={'id': '13dda494b5cfa28383be06d36faf74ae86e909c42df192dd0f09776a14231531', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:49.748963 | orchestrator | ok: [testbed-node-3] => (item={'id': '004281eb6fb664f342929f44afccba646061099e916b479f26ca037af61a58ef', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:49.748972 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6af62fca0a0329814367349e309e26a0af32e06ad2460bcd1761e7d69508e159', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:49.748980 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6895dfdaace29a0e83367b93e7602957a89c7685a0ffbe4fea3929dc04133e62', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-05 02:31:49.748989 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b82baa60053e41ae002061e7a150d0bbcd29e4b0cbff4425de796bb8c6747112', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-05 02:31:49.748998 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0457241cfcfb8e7bc1a3a7beac3a6a48c889e6a438b797fb3e64e4612dab53d8', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-05 02:31:49.749007 | orchestrator | skipping: [testbed-node-3] => (item={'id': '85cafb1877d28723965e4adae42294f42bc0d94ec2791ae370ade15575a3b680', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-05 02:31:49.749016 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6e0da4c938be45b6d2e478e4954ffb9c10fd03b3543eda7cc93884eae2973ad0', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-05 02:31:49.749025 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0358b3f6c99bbf5c1d88bbb6e9462e9842c078d6be8db7b4c9613a1a8a2fdab8', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-01-05 02:31:49.749043 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9e001dc95e0b57eb281487d5bfee5ebe9f2ee45f628a86ed198026b945d7a4a1', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-01-05 02:31:50.010465 | orchestrator | skipping: [testbed-node-4] => (item={'id': '08b247cb746f0d52915ce17d64854b4014aaab49b2ef5b2806b9336e0237b288', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-01-05 02:31:50.010599 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'be259097f5068fe5b8f467bd7ca12fa155c0b2b7a70cae11be3d5e6bfb9f830d', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-01-05 02:31:50.010628 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9f750461f1f2f480652fdee3029ab844dd314b5fb244b6f71a68c60624a840bf', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-01-05 02:31:50.010639 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0b455cc68ddfe2e28dc62f03deecc6567c36c661c5fff2a7d5cd2e242bb8e06c', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-01-05 02:31:50.010651 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b63a6aad8dcb101aadb32f786af751725ec7ab56ef9ef6ce76950595ff701537', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-01-05 02:31:50.010659 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f3c2bd5a99624041cba6d610885f9b84dace44b1732881448cb700c621600956', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-01-05 02:31:50.010666 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ec49a3605b27f69b3c35942fd2cec569cfa272bf73e2a2f2595241ef3c3bf58d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010675 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9f78e86dfe7d819c0cd30013a35e14820847e757325ba42c6662311e0bf3f377', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010682 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e3c1f385e64d645a9b980be111ae644cc1cfe1e531f0a6c4218263e8d5074caa', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010691 | orchestrator | ok: [testbed-node-4] => (item={'id': '23fb58c54e8927539831cb9f64b5bf7b1c49fe94c71f76bff7bd7449969f50d7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010699 | orchestrator | ok: [testbed-node-4] => (item={'id': '6c4f962e9e047a54acb3499f0986dd129985c2e524d03790d20c182b8433c9d1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010706 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a3d57ca5185e0922c1af8fb9e5c79c08f17153c4efad02f25b13d7af16eb1f7a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010713 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6ecb7a6d674dea19d76b47d90be43e4334f6a2bf9815422fffa06e510916c667', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-05 02:31:50.010720 | orchestrator | skipping: [testbed-node-4] => (item={'id': '13e4f4ca9cdd3a5bcf585919e0bc6d4b975651022b6080bf2c29eb8d23e663af', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-05 02:31:50.010742 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'df2a6c205c77d30266aa2a3636f8c6c179b314848c00f66e5c3678e48cd0df85', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-05 02:31:50.010755 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5dbe317feefebe10d27ce5050c920dedc07d284e890395936eacc3949ab981f5', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-05 02:31:50.010762 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a00c51b495f9625f8430d469048b9a308449181b1bffba2ca07cf9f082f4ae37', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-05 02:31:50.010769 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ef7b33168ca094d5376ec0d9fdf0eca248274ad1fa84c750f7b456a24113f604', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-01-05 02:31:50.010776 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd5212928f60539e350dc93cf082488cf5a41ea222be404a92ac20184d3d15ee2', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-01-05 02:31:50.010787 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5bd92f6bd76add8813f793ca80d4857d5f9396b25d3f417ec9275dd737f73526', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-01-05 02:31:50.010794 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cc32db0581e3f25332bf14c167b69ca056bbff727b09e1a9375d13caf10d7d51', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-01-05 02:31:50.010802 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c12dd0c31517f05010e1aeddde3cbeb6be517137acadab076a22f837426ee1e6', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-01-05 02:31:50.010809 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fb0bebd1e573e73985e52d546ad7849e6d5d9791e06d30955004a4bfabe9f67c', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-01-05 02:31:50.010816 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a8ef3e3def7fdf6d4dcf55ade30620b2d5f6e145340f9ea92b3015ac6878df61', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-01-05 02:31:50.010823 | orchestrator | skipping: [testbed-node-5] => (item={'id': '63cef5ee161248eea255c3df0175bf0b8e4aa857fbc78bc6ba1ef8c8ebb1e829', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-01-05 02:31:50.010830 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2c46b5c110f7c8cb0dbed71320a5f6e9b6954acdb3b0c5ba303313314e766220', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010837 | orchestrator | skipping: [testbed-node-5] => (item={'id': '37b0afae5a08755019bf3db04cb34bc028f6be76b0c833d05371e30b208fca45', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010844 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0cffc49f2af5491101b014894d80843b85829175c800c80877f0dd74f6158b22', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010856 | orchestrator | ok: [testbed-node-5] => (item={'id': '4d09d79b5420d72133afe62ecc2ca7b4d9073b038dee390a108e8c19e9b5e246', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:31:50.010868 | orchestrator | ok: [testbed-node-5] => (item={'id': '810b387136b62508f8e10bd634d6ce36f28c3e4218ea3c60904664c5825d712b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:32:02.339185 | orchestrator | skipping: [testbed-node-5] => (item={'id': '977e3cc8debc1344f3b5b7f7100d8af98af755bec660037cdfaf7de0319c756e', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-01-05 02:32:02.339276 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e4b2effb2efafa9a292989cbc96c990ccb578cb866fac24b69e8c7e4db4a96bd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-05 02:32:02.339285 | orchestrator | skipping: [testbed-node-5] => (item={'id': '62b2dfc599607b702f3bc82ec3298109e78f18b957748cf23b974c4f7db933bc', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-05 02:32:02.339291 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1b02ed54c5907b453602451a80bbefb19544a0e1fff4aef722849de2f2acae14', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-05 02:32:02.339311 | orchestrator | skipping: [testbed-node-5] => (item={'id': '42cec1596245780187ad8a15a277a47dba791b9830e02f45b87f1bae1bac7cdd', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-05 02:32:02.339316 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6176aaa27c85965db92a3d12a5ba36abc719fe2cd389333bc90bf482c62727f2', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-05 02:32:02.339320 | orchestrator |
2026-01-05 02:32:02.339326 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-01-05 02:32:02.339331 | orchestrator | Monday 05 January 2026 02:31:49 +0000 (0:00:00.534) 0:00:05.440 ********
2026-01-05 02:32:02.339336 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.339341 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:02.339345 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:02.339349 | orchestrator |
2026-01-05 02:32:02.339353 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-01-05 02:32:02.339356 | orchestrator | Monday 05 January 2026 02:31:50 +0000 (0:00:00.304) 0:00:05.745 ********
2026-01-05 02:32:02.339361 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.339365 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:32:02.339369 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:32:02.339373 | orchestrator |
2026-01-05 02:32:02.339377 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-01-05 02:32:02.339381 | orchestrator | Monday 05 January 2026 02:31:50 +0000 (0:00:00.527) 0:00:06.272 ********
2026-01-05 02:32:02.339385 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.339389 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:02.339393 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:02.339397 | orchestrator |
2026-01-05 02:32:02.339401 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-05 02:32:02.339405 | orchestrator | Monday 05 January 2026 02:31:51 +0000 (0:00:00.332) 0:00:06.605 ********
2026-01-05 02:32:02.339408 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.339412 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:02.339439 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:02.339448 | orchestrator |
2026-01-05 02:32:02.339455 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-01-05 02:32:02.339461 | orchestrator | Monday 05 January 2026 02:31:51 +0000 (0:00:00.312) 0:00:06.918 ********
2026-01-05 02:32:02.339468 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-01-05 02:32:02.339476 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-01-05 02:32:02.339481 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.339487 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-01-05 02:32:02.339493 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-01-05 02:32:02.339499 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:32:02.339505 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-01-05 02:32:02.339511 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-01-05 02:32:02.339517 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:32:02.339523 | orchestrator |
2026-01-05 02:32:02.339529 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-01-05 02:32:02.339535 | orchestrator | Monday 05 January 2026 02:31:51 +0000 (0:00:00.333) 0:00:07.251 ********
2026-01-05 02:32:02.339540 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.339546 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:02.339552 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:02.339558 | orchestrator |
2026-01-05 02:32:02.339564 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-01-05 02:32:02.339570 | orchestrator | Monday 05 January 2026 02:31:52 +0000 (0:00:00.534) 0:00:07.786 ********
2026-01-05 02:32:02.339576 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.339597 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:32:02.339604 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:32:02.339610 | orchestrator |
2026-01-05 02:32:02.339616 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-01-05 02:32:02.339622 | orchestrator | Monday 05 January 2026 02:31:52 +0000 (0:00:00.362) 0:00:08.149 ********
2026-01-05 02:32:02.339628 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.339634 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:32:02.339640 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:32:02.339646 | orchestrator |
2026-01-05 02:32:02.339651 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-01-05 02:32:02.339658 | orchestrator | Monday 05 January 2026 02:31:53 +0000 (0:00:00.328) 0:00:08.478 ********
2026-01-05 02:32:02.339664 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.339669 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:02.339675 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:02.339681 | orchestrator |
2026-01-05 02:32:02.339687 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-05 02:32:02.339693 | orchestrator | Monday 05 January 2026 02:31:53 +0000 (0:00:00.342) 0:00:08.820 ********
2026-01-05 02:32:02.339699 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.339705 | orchestrator |
2026-01-05 02:32:02.339710 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-05 02:32:02.339716 | orchestrator | Monday 05 January 2026 02:31:54 +0000 (0:00:00.734) 0:00:09.554 ********
2026-01-05 02:32:02.339722 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.339728 | orchestrator |
2026-01-05 02:32:02.339734 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-05 02:32:02.339741 | orchestrator | Monday 05 January 2026 02:31:54 +0000 (0:00:00.256) 0:00:09.811 ********
2026-01-05 02:32:02.339747 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.339753 | orchestrator |
2026-01-05 02:32:02.339759 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 02:32:02.339771 | orchestrator | Monday 05 January 2026 02:31:54 +0000 (0:00:00.272) 0:00:10.083 ********
2026-01-05 02:32:02.339778 | orchestrator |
2026-01-05 02:32:02.339784 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 02:32:02.339791 | orchestrator | Monday 05 January 2026 02:31:54 +0000 (0:00:00.076) 0:00:10.160 ********
2026-01-05 02:32:02.339797 | orchestrator |
2026-01-05 02:32:02.339803 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 02:32:02.339811 | orchestrator | Monday 05 January 2026 02:31:54 +0000 (0:00:00.074) 0:00:10.234 ********
2026-01-05 02:32:02.339818 | orchestrator |
2026-01-05 02:32:02.339825 | orchestrator | TASK [Print report file information] *******************************************
2026-01-05 02:32:02.339831 | orchestrator | Monday 05 January 2026 02:31:54 +0000 (0:00:00.073) 0:00:10.308 ********
2026-01-05 02:32:02.339838 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.339844 | orchestrator |
2026-01-05 02:32:02.339850 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-01-05 02:32:02.339856 | orchestrator | Monday 05 January 2026 02:31:55 +0000 (0:00:00.271) 0:00:10.580 ********
2026-01-05 02:32:02.339863 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.339869 | orchestrator |
2026-01-05 02:32:02.339875 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-05 02:32:02.339882 | orchestrator | Monday 05 January 2026 02:31:55 +0000 (0:00:00.278) 0:00:10.859 ********
2026-01-05 02:32:02.339888 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.339894 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:02.339901 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:02.339908 | orchestrator |
2026-01-05 02:32:02.339915 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-01-05 02:32:02.339921 | orchestrator | Monday 05 January 2026 02:31:55 +0000 (0:00:00.309) 0:00:11.169 ********
2026-01-05 02:32:02.339928 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.339935 | orchestrator |
2026-01-05 02:32:02.339942 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-01-05 02:32:02.339948 | orchestrator | Monday 05 January 2026 02:31:56 +0000 (0:00:00.734) 0:00:11.903 ********
2026-01-05 02:32:02.339955 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-05 02:32:02.339961 | orchestrator |
2026-01-05 02:32:02.339968 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-01-05 02:32:02.339976 | orchestrator | Monday 05 January 2026 02:31:58 +0000 (0:00:01.900) 0:00:13.804 ********
2026-01-05 02:32:02.339983 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.339989 | orchestrator |
2026-01-05 02:32:02.339996 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-01-05 02:32:02.340004 | orchestrator | Monday 05 January 2026 02:31:58 +0000 (0:00:00.156) 0:00:13.960 ********
2026-01-05 02:32:02.340011 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.340018 | orchestrator |
2026-01-05 02:32:02.340025 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-01-05 02:32:02.340032 | orchestrator | Monday 05 January 2026 02:31:58 +0000 (0:00:00.343) 0:00:14.304 ********
2026-01-05 02:32:02.340039 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:02.340045 | orchestrator |
2026-01-05 02:32:02.340053 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-01-05 02:32:02.340060 | orchestrator | Monday 05 January 2026 02:31:58 +0000 (0:00:00.125) 0:00:14.430 ********
2026-01-05 02:32:02.340066 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.340073 | orchestrator |
2026-01-05 02:32:02.340080 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-05 02:32:02.340086 | orchestrator | Monday 05 January 2026 02:31:59 +0000 (0:00:00.165) 0:00:14.595 ********
2026-01-05 02:32:02.340092 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:02.340099 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:02.340105 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:02.340139 | orchestrator |
2026-01-05 02:32:02.340148 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-01-05 02:32:02.340152 | orchestrator | Monday 05 January 2026 02:31:59 +0000 (0:00:00.352) 0:00:14.948 ********
2026-01-05 02:32:02.340156 | orchestrator | changed: [testbed-node-3]
2026-01-05 02:32:02.340160 | orchestrator | changed: [testbed-node-4]
2026-01-05 02:32:02.340165 | orchestrator | changed: [testbed-node-5]
2026-01-05 02:32:13.220245 | orchestrator |
2026-01-05 02:32:13.220317 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-01-05 02:32:13.220325 | orchestrator | Monday 05 January 2026 02:32:02 +0000 (0:00:02.825) 0:00:17.774 ********
2026-01-05 02:32:13.220331 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:13.220336 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:13.220341 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:13.220346 | orchestrator |
2026-01-05 02:32:13.220352 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-01-05 02:32:13.220361 | orchestrator | Monday 05 January 2026 02:32:02 +0000 (0:00:00.323) 0:00:18.098 ********
2026-01-05 02:32:13.220369 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:13.220377 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:13.220385 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:13.220393 | orchestrator |
2026-01-05 02:32:13.220400 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-01-05 02:32:13.220408 | orchestrator | Monday 05 January 2026 02:32:03 +0000 (0:00:00.587) 0:00:18.685 ********
2026-01-05 02:32:13.220417 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:13.220424 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:32:13.220429 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:32:13.220434 | orchestrator |
2026-01-05 02:32:13.220438 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-01-05 02:32:13.220443 | orchestrator | Monday 05 January 2026 02:32:03 +0000 (0:00:00.315) 0:00:19.000 ********
2026-01-05 02:32:13.220448 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:13.220453 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:13.220457 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:13.220462 | orchestrator |
2026-01-05 02:32:13.220467 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-01-05 02:32:13.220474 | orchestrator | Monday 05 January 2026 02:32:04 +0000 (0:00:00.553) 0:00:19.554 ********
2026-01-05 02:32:13.220479 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:13.220484 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:32:13.220489 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:32:13.220494 | orchestrator |
2026-01-05 02:32:13.220499 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-01-05 02:32:13.220504 | orchestrator | Monday 05 January 2026 02:32:04 +0000 (0:00:00.314) 0:00:19.868 ********
2026-01-05 02:32:13.220508 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:13.220513 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:32:13.220518 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:32:13.220523 | orchestrator |
2026-01-05 02:32:13.220527 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-05 02:32:13.220532 | orchestrator | Monday 05 January 2026 02:32:04 +0000 (0:00:00.308) 0:00:20.177 ********
2026-01-05 02:32:13.220537 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:13.220541 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:13.220546 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:13.220551 | orchestrator |
2026-01-05 02:32:13.220556 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-01-05 02:32:13.220563 | orchestrator | Monday 05 January 2026 02:32:05 +0000 (0:00:00.501) 0:00:20.678 ********
2026-01-05 02:32:13.220571 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:13.220579 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:13.220586 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:13.220594 | orchestrator |
2026-01-05 02:32:13.220600 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-01-05 02:32:13.220622 | orchestrator | Monday 05 January 2026 02:32:06 +0000 (0:00:00.864) 0:00:21.543 ********
2026-01-05 02:32:13.220631 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:13.220638 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:13.220646 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:13.220653 | orchestrator |
2026-01-05 02:32:13.220661 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-01-05 02:32:13.220669 | orchestrator | Monday 05 January 2026 02:32:06 +0000 (0:00:00.312) 0:00:21.855 ********
2026-01-05 02:32:13.220674 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:13.220679 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:32:13.220683 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:32:13.220688 | orchestrator |
2026-01-05 02:32:13.220693 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-01-05 02:32:13.220697 | orchestrator | Monday 05 January 2026 02:32:06 +0000 (0:00:00.322) 0:00:22.178 ********
2026-01-05 02:32:13.220702 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:32:13.220706 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:32:13.220711 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:32:13.220716 | orchestrator |
2026-01-05 02:32:13.220720 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-05 02:32:13.220725 | orchestrator | Monday 05 January 2026 02:32:07 +0000 (0:00:00.589) 0:00:22.767 ********
2026-01-05 02:32:13.220730 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 02:32:13.220735 | orchestrator |
2026-01-05 02:32:13.220739 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-05 02:32:13.220744 | orchestrator | Monday 05 January 2026 02:32:07 +0000 (0:00:00.303) 0:00:23.071 ********
2026-01-05 02:32:13.220749 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:32:13.220753 | orchestrator |
2026-01-05 02:32:13.220758 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-05 02:32:13.220763 | orchestrator | Monday 05 January 2026 02:32:07 +0000 (0:00:00.291) 0:00:23.362 ********
2026-01-05 02:32:13.220767 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 02:32:13.220772 | orchestrator |
2026-01-05 02:32:13.220776 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-05 02:32:13.220781 | orchestrator | Monday 05 January 2026 02:32:09 +0000 (0:00:01.762) 0:00:25.124 ********
2026-01-05 02:32:13.220786 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 02:32:13.220791 | orchestrator |
2026-01-05 02:32:13.220797 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-05 02:32:13.220802 | orchestrator | Monday 05 January 2026 02:32:09 +0000 (0:00:00.281) 0:00:25.405 ********
2026-01-05 02:32:13.220808 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 02:32:13.220813 | orchestrator |
2026-01-05 02:32:13.220828 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 02:32:13.220834 | orchestrator | Monday 05 January 2026 02:32:10 +0000 (0:00:00.284) 0:00:25.690 ********
2026-01-05 02:32:13.220839 | orchestrator |
2026-01-05 02:32:13.220844 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 02:32:13.220849 | orchestrator | Monday 05 January 2026 02:32:10 +0000 (0:00:00.095) 0:00:25.785 ********
2026-01-05 02:32:13.220855 | orchestrator |
2026-01-05 02:32:13.220860 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-05 02:32:13.220865 | orchestrator | Monday 05 January 2026 02:32:10 +0000 (0:00:00.079) 0:00:25.865 ********
2026-01-05 02:32:13.220870 | orchestrator |
2026-01-05 02:32:13.220875 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-05 02:32:13.220881 | orchestrator | Monday 05 January 2026 02:32:10 +0000 (0:00:00.081) 0:00:25.947 ********
2026-01-05 02:32:13.220886 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-05 02:32:13.220891 | orchestrator |
2026-01-05 02:32:13.220896 | orchestrator | TASK [Print report file information] *******************************************
2026-01-05 02:32:13.220906 | orchestrator | Monday 05 January 2026 02:32:12 +0000 (0:00:01.643) 0:00:27.591 ********
2026-01-05 02:32:13.220911 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-01-05 02:32:13.220916 | orchestrator |     "msg": [
2026-01-05 02:32:13.220922 | orchestrator |         "Validator run completed.",
2026-01-05 02:32:13.220928 | orchestrator |         "You can find the report file here:",
2026-01-05 02:32:13.220933 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2026-01-05T02:31:45+00:00-report.json",
2026-01-05 02:32:13.220942 | orchestrator |         "on the following host:",
2026-01-05 02:32:13.220948 | orchestrator |         "testbed-manager"
2026-01-05 02:32:13.220953 | orchestrator |     ]
2026-01-05 02:32:13.220959 | orchestrator | }
2026-01-05 02:32:13.220964 | orchestrator |
2026-01-05 02:32:13.220968 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:32:13.220974 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 02:32:13.220981 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-05 02:32:13.220989 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-05 02:32:13.220997 | orchestrator |
2026-01-05 02:32:13.221004 | orchestrator |
2026-01-05 02:32:13.221012 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:32:13.221020 | orchestrator | Monday 05 January 2026 02:32:12 +0000 (0:00:00.672) 0:00:28.264 ********
2026-01-05 02:32:13.221028 | orchestrator | ===============================================================================
2026-01-05 02:32:13.221036 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.83s
2026-01-05 02:32:13.221044 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.90s
2026-01-05 02:32:13.221052 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s
2026-01-05 02:32:13.221060 | orchestrator | Write report file ------------------------------------------------------- 1.64s
2026-01-05 02:32:13.221065 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.86s
2026-01-05 02:32:13.221070 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2026-01-05 02:32:13.221074 | orchestrator | Create report output directory ------------------------------------------ 0.81s
2026-01-05 02:32:13.221079 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.79s
2026-01-05 02:32:13.221083 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.73s
2026-01-05 02:32:13.221088 | orchestrator | Aggregate test results step one ----------------------------------------- 0.73s
2026-01-05 02:32:13.221092 | orchestrator | Print report file information ------------------------------------------- 0.67s
2026-01-05 02:32:13.221097 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.59s
2026-01-05 02:32:13.221102 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.59s
2026-01-05 02:32:13.221106 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.59s
2026-01-05 02:32:13.221111 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.55s
2026-01-05 02:32:13.221115 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.53s
2026-01-05 02:32:13.221120 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.53s
2026-01-05 02:32:13.221125
| orchestrator | Set test result to failed when count of containers is wrong ------------- 0.53s 2026-01-05 02:32:13.221129 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-01-05 02:32:13.221134 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.36s 2026-01-05 02:32:13.600519 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-01-05 02:32:13.612393 | orchestrator | + set -e 2026-01-05 02:32:13.612471 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 02:32:13.613564 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 02:32:13.613628 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 02:32:13.613640 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 02:32:13.613647 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 02:32:13.613657 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 02:32:13.613667 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 02:32:13.613677 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 02:32:13.613686 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 02:32:13.613695 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 02:32:13.613705 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 02:32:13.613713 | orchestrator | ++ export ARA=false 2026-01-05 02:32:13.613722 | orchestrator | ++ ARA=false 2026-01-05 02:32:13.613731 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 02:32:13.613740 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 02:32:13.613749 | orchestrator | ++ export TEMPEST=false 2026-01-05 02:32:13.613758 | orchestrator | ++ TEMPEST=false 2026-01-05 02:32:13.613766 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 02:32:13.613776 | orchestrator | ++ IS_ZUUL=true 2026-01-05 02:32:13.613785 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 02:32:13.613794 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 02:32:13.613802 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 02:32:13.613810 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 02:32:13.613818 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 02:32:13.613827 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 02:32:13.613837 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 02:32:13.613845 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 02:32:13.613854 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 02:32:13.613863 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 02:32:13.613882 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-05 02:32:13.613892 | orchestrator | + source /etc/os-release 2026-01-05 02:32:13.613902 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-01-05 02:32:13.613911 | orchestrator | ++ NAME=Ubuntu 2026-01-05 02:32:13.613921 | orchestrator | ++ VERSION_ID=24.04 2026-01-05 02:32:13.613930 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-01-05 02:32:13.613939 | orchestrator | ++ VERSION_CODENAME=noble 2026-01-05 02:32:13.613948 | orchestrator | ++ ID=ubuntu 2026-01-05 02:32:13.613957 | orchestrator | ++ ID_LIKE=debian 2026-01-05 02:32:13.613967 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-01-05 02:32:13.613975 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-01-05 02:32:13.613985 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-01-05 02:32:13.613996 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-01-05 02:32:13.614007 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-01-05 02:32:13.614067 | orchestrator | ++ LOGO=ubuntu-logo 2026-01-05 02:32:13.614082 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-01-05 02:32:13.614092 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-01-05 
02:32:13.614102 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-05 02:32:13.628651 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-05 02:32:35.121589 | orchestrator | 2026-01-05 02:32:35.121681 | orchestrator | # Status of Elasticsearch 2026-01-05 02:32:35.121690 | orchestrator | 2026-01-05 02:32:35.121697 | orchestrator | + pushd /opt/configuration/contrib 2026-01-05 02:32:35.121704 | orchestrator | + echo 2026-01-05 02:32:35.121710 | orchestrator | + echo '# Status of Elasticsearch' 2026-01-05 02:32:35.121716 | orchestrator | + echo 2026-01-05 02:32:35.121722 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-01-05 02:32:35.296663 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-01-05 02:32:35.296765 | orchestrator | 2026-01-05 02:32:35.296780 | orchestrator | # Status of MariaDB 2026-01-05 02:32:35.296794 | orchestrator | 2026-01-05 02:32:35.296806 | orchestrator | + echo 2026-01-05 02:32:35.296848 | orchestrator | + echo '# Status of MariaDB' 2026-01-05 02:32:35.296859 | orchestrator | + echo 2026-01-05 02:32:35.297411 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-05 02:32:35.348862 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 02:32:35.349022 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-05 02:32:35.349056 | orchestrator | + MARIADB_USER=root_shard_0 2026-01-05 02:32:35.349087 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-01-05 02:32:35.426417 
| orchestrator | Reading package lists... 2026-01-05 02:32:35.813180 | orchestrator | Building dependency tree... 2026-01-05 02:32:35.813991 | orchestrator | Reading state information... 2026-01-05 02:32:36.277052 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-01-05 02:32:36.277152 | orchestrator | bc set to manually installed. 2026-01-05 02:32:36.277163 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-01-05 02:32:36.938971 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-01-05 02:32:36.939633 | orchestrator | 2026-01-05 02:32:36.939714 | orchestrator | # Status of Prometheus 2026-01-05 02:32:36.939734 | orchestrator | 2026-01-05 02:32:36.939749 | orchestrator | + echo 2026-01-05 02:32:36.939761 | orchestrator | + echo '# Status of Prometheus' 2026-01-05 02:32:36.939770 | orchestrator | + echo 2026-01-05 02:32:36.939779 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-01-05 02:32:37.030734 | orchestrator | Unauthorized 2026-01-05 02:32:37.034632 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-01-05 02:32:37.087513 | orchestrator | Unauthorized 2026-01-05 02:32:37.090696 | orchestrator | 2026-01-05 02:32:37.090759 | orchestrator | # Status of RabbitMQ 2026-01-05 02:32:37.090765 | orchestrator | 2026-01-05 02:32:37.090770 | orchestrator | + echo 2026-01-05 02:32:37.090775 | orchestrator | + echo '# Status of RabbitMQ' 2026-01-05 02:32:37.090779 | orchestrator | + echo 2026-01-05 02:32:37.092166 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-05 02:32:37.152305 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 02:32:37.152395 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-05 02:32:37.152406 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-01-05 02:32:37.677346 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-01-05 02:32:37.687972 | orchestrator | 2026-01-05 02:32:37.688080 | orchestrator | # Status of Redis 2026-01-05 02:32:37.688098 | orchestrator | 2026-01-05 02:32:37.688113 | orchestrator | + echo 2026-01-05 02:32:37.688127 | orchestrator | + echo '# Status of Redis' 2026-01-05 02:32:37.688142 | orchestrator | + echo 2026-01-05 02:32:37.688157 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-01-05 02:32:37.696606 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.003053s;;;0.000000;10.000000 2026-01-05 02:32:37.696717 | orchestrator | 2026-01-05 02:32:37.696738 | orchestrator | # Create backup of MariaDB database 2026-01-05 02:32:37.696757 | orchestrator | 2026-01-05 02:32:37.696774 | orchestrator | + popd 2026-01-05 02:32:37.696791 | orchestrator | + echo 2026-01-05 02:32:37.696808 | orchestrator | + echo '# Create backup of MariaDB database' 2026-01-05 02:32:37.696825 | orchestrator | + echo 2026-01-05 02:32:37.696842 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-01-05 02:32:39.731923 | orchestrator | 2026-01-05 02:32:39 | INFO  | Task d1df2ece-b9a3-4947-8d12-63a4a60a10c5 (mariadb_backup) was prepared for execution. 2026-01-05 02:32:39.732014 | orchestrator | 2026-01-05 02:32:39 | INFO  | It takes a moment until task d1df2ece-b9a3-4947-8d12-63a4a60a10c5 (mariadb_backup) has been started and output is visible here. 
2026-01-05 02:34:08.980138 | orchestrator | 2026-01-05 02:34:08.980228 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 02:34:08.980237 | orchestrator | 2026-01-05 02:34:08.980243 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 02:34:08.980249 | orchestrator | Monday 05 January 2026 02:32:43 +0000 (0:00:00.187) 0:00:00.187 ******** 2026-01-05 02:34:08.980254 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:34:08.980260 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:34:08.980265 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:34:08.980270 | orchestrator | 2026-01-05 02:34:08.980291 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 02:34:08.980296 | orchestrator | Monday 05 January 2026 02:32:44 +0000 (0:00:00.320) 0:00:00.507 ******** 2026-01-05 02:34:08.980301 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-05 02:34:08.980307 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-05 02:34:08.980312 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-05 02:34:08.980316 | orchestrator | 2026-01-05 02:34:08.980321 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-05 02:34:08.980326 | orchestrator | 2026-01-05 02:34:08.980331 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-05 02:34:08.980335 | orchestrator | Monday 05 January 2026 02:32:44 +0000 (0:00:00.615) 0:00:01.123 ******** 2026-01-05 02:34:08.980340 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 02:34:08.980345 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-05 02:34:08.980350 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-05 02:34:08.980355 | orchestrator | 
2026-01-05 02:34:08.980359 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 02:34:08.980364 | orchestrator | Monday 05 January 2026 02:32:45 +0000 (0:00:00.435) 0:00:01.559 ******** 2026-01-05 02:34:08.980369 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:34:08.980376 | orchestrator | 2026-01-05 02:34:08.980381 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-01-05 02:34:08.980396 | orchestrator | Monday 05 January 2026 02:32:45 +0000 (0:00:00.579) 0:00:02.138 ******** 2026-01-05 02:34:08.980467 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:34:08.980473 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:34:08.980478 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:34:08.980482 | orchestrator | 2026-01-05 02:34:08.980487 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-01-05 02:34:08.980492 | orchestrator | Monday 05 January 2026 02:32:49 +0000 (0:00:03.613) 0:00:05.751 ******** 2026-01-05 02:34:08.980496 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-05 02:34:08.980501 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-05 02:34:08.980507 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-05 02:34:08.980512 | orchestrator | mariadb_bootstrap_restart 2026-01-05 02:34:08.980517 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:34:08.980521 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:34:08.980526 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:34:08.980531 | orchestrator | 2026-01-05 02:34:08.980535 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-05 02:34:08.980540 | orchestrator | 
skipping: no hosts matched 2026-01-05 02:34:08.980545 | orchestrator | 2026-01-05 02:34:08.980549 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-05 02:34:08.980554 | orchestrator | skipping: no hosts matched 2026-01-05 02:34:08.980559 | orchestrator | 2026-01-05 02:34:08.980563 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-05 02:34:08.980568 | orchestrator | skipping: no hosts matched 2026-01-05 02:34:08.980573 | orchestrator | 2026-01-05 02:34:08.980577 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-05 02:34:08.980582 | orchestrator | 2026-01-05 02:34:08.980587 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-05 02:34:08.980591 | orchestrator | Monday 05 January 2026 02:34:07 +0000 (0:01:18.370) 0:01:24.122 ******** 2026-01-05 02:34:08.980596 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:34:08.980600 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:34:08.980605 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:34:08.980610 | orchestrator | 2026-01-05 02:34:08.980614 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-05 02:34:08.980625 | orchestrator | Monday 05 January 2026 02:34:08 +0000 (0:00:00.327) 0:01:24.450 ******** 2026-01-05 02:34:08.980630 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:34:08.980634 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:34:08.980639 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:34:08.980643 | orchestrator | 2026-01-05 02:34:08.980648 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:34:08.980654 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 
02:34:08.980660 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 02:34:08.980665 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 02:34:08.980670 | orchestrator | 2026-01-05 02:34:08.980674 | orchestrator | 2026-01-05 02:34:08.980679 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:34:08.980684 | orchestrator | Monday 05 January 2026 02:34:08 +0000 (0:00:00.411) 0:01:24.861 ******** 2026-01-05 02:34:08.980688 | orchestrator | =============================================================================== 2026-01-05 02:34:08.980693 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 78.37s 2026-01-05 02:34:08.980711 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.61s 2026-01-05 02:34:08.980717 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-01-05 02:34:08.980723 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s 2026-01-05 02:34:08.980728 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.44s 2026-01-05 02:34:08.980733 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2026-01-05 02:34:08.980739 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s 2026-01-05 02:34:08.980744 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-01-05 02:34:09.314445 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-01-05 02:34:09.320553 | orchestrator | + set -e 2026-01-05 02:34:09.320640 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 02:34:09.321516 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-05 02:34:09.321538 | orchestrator | ++ INTERACTIVE=false 2026-01-05 02:34:09.321546 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 02:34:09.321553 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 02:34:09.321607 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-05 02:34:09.323885 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-05 02:34:09.330617 | orchestrator | 2026-01-05 02:34:09.330695 | orchestrator | # OpenStack endpoints 2026-01-05 02:34:09.330704 | orchestrator | 2026-01-05 02:34:09.330709 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 02:34:09.330713 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 02:34:09.330718 | orchestrator | + export OS_CLOUD=admin 2026-01-05 02:34:09.330722 | orchestrator | + OS_CLOUD=admin 2026-01-05 02:34:09.330726 | orchestrator | + echo 2026-01-05 02:34:09.330731 | orchestrator | + echo '# OpenStack endpoints' 2026-01-05 02:34:09.330735 | orchestrator | + echo 2026-01-05 02:34:09.330739 | orchestrator | + openstack endpoint list 2026-01-05 02:34:12.610308 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-05 02:34:12.610394 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-01-05 02:34:12.610404 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-05 02:34:12.610462 | orchestrator | | 0050e1a6722b4352ba3687b7a5b5e402 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-01-05 02:34:12.610509 | orchestrator | | 0494725ebcf0497382ce0a7957fb0d3a | RegionOne | aodh | alarming | True | 
public | https://api.testbed.osism.xyz:8042 | 2026-01-05 02:34:12.610517 | orchestrator | | 0b913032c76b49eb82302dbc4e820176 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-01-05 02:34:12.610525 | orchestrator | | 2123befefc6843b987646a50f455aa5a | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-01-05 02:34:12.610532 | orchestrator | | 350ad891e0044b02b8b6322010031239 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-01-05 02:34:12.610539 | orchestrator | | 37020d1217ad49558fd089600665d5f6 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-01-05 02:34:12.610547 | orchestrator | | 377ae7e8ffd649b4ae3fcab717f59c30 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-01-05 02:34:12.610554 | orchestrator | | 3817a3725d8c4a2cb710d870a5d9ac36 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-01-05 02:34:12.610561 | orchestrator | | 6404883c24ba4a7aa181e92a9a0524ad | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-01-05 02:34:12.610568 | orchestrator | | 6e25fa856ed549dfa80101a31c75600e | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-05 02:34:12.610575 | orchestrator | | 79e3230b8844421d8a4fbf4cdb654f35 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-01-05 02:34:12.610582 | orchestrator | | 82e2e88724854a44ad476500f1b9b850 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-01-05 02:34:12.610589 | orchestrator | | 854f03ec465040e985922ccefeb23549 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-01-05 
02:34:12.610596 | orchestrator | | 8a1020f6c4714cf3b844a411619bf73c | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-01-05 02:34:12.610604 | orchestrator | | 903928e27d1a458c966e781ed9298a87 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-01-05 02:34:12.610612 | orchestrator | | a071b506d4594e3883bc95886a1c8acb | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-01-05 02:34:12.610620 | orchestrator | | a3dc23bcdc7d417ba9d8289b2312f9b3 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-05 02:34:12.610627 | orchestrator | | a5df32365d52414eb7ce3f9c8a7faaa9 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-01-05 02:34:12.610632 | orchestrator | | a5e194f8bec94939827974f3836a5415 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-01-05 02:34:12.610637 | orchestrator | | b756afae673c4686801a2c40a9153229 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-01-05 02:34:12.610656 | orchestrator | | b94595fa652b468cb64c36da73a60b23 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-01-05 02:34:12.610674 | orchestrator | | bf9285c58e3e4e63a800fb17912dd03d | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-01-05 02:34:12.610687 | orchestrator | | d83f0c607f5442d3a826360e430b779f | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-05 02:34:12.610696 | orchestrator | | dd2a2700e52f4ed9a2df9db997baf24d | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-01-05 02:34:12.610705 | orchestrator | | e7851e65e3dd44ecb81cd05c7ca70eae | RegionOne | 
nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-01-05 02:34:12.610709 | orchestrator | | e94b5231b0b14ad68dd3ac36365e69f6 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-01-05 02:34:12.610714 | orchestrator | | ea0df8fd4020423c9697ffb4ccd5a38c | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-01-05 02:34:12.610719 | orchestrator | | f9ee77a42afd4445927f853595f04a9d | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-01-05 02:34:12.610724 | orchestrator | | fce2489622754c4fa3cf6279d50de810 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-05 02:34:12.610728 | orchestrator | | fdcf0111c26b43eb82edc656a52836ed | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-01-05 02:34:12.610733 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-05 02:34:12.855951 | orchestrator | 2026-01-05 02:34:12.856029 | orchestrator | # Cinder 2026-01-05 02:34:12.856036 | orchestrator | 2026-01-05 02:34:12.856040 | orchestrator | + echo 2026-01-05 02:34:12.856045 | orchestrator | + echo '# Cinder' 2026-01-05 02:34:12.856050 | orchestrator | + echo 2026-01-05 02:34:12.856054 | orchestrator | + openstack volume service list 2026-01-05 02:34:15.553256 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-05 02:34:15.553350 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-01-05 02:34:15.553359 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-05 02:34:15.553366 | 
orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-05T02:34:08.000000 | 2026-01-05 02:34:15.553372 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-05T02:34:08.000000 | 2026-01-05 02:34:15.553379 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-05T02:34:08.000000 | 2026-01-05 02:34:15.553385 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-01-05T02:34:07.000000 | 2026-01-05 02:34:15.553392 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-01-05T02:34:15.000000 | 2026-01-05 02:34:15.553398 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-01-05T02:34:05.000000 | 2026-01-05 02:34:15.553405 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-01-05T02:34:12.000000 | 2026-01-05 02:34:15.553465 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-01-05T02:34:14.000000 | 2026-01-05 02:34:15.553472 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-01-05T02:34:15.000000 | 2026-01-05 02:34:15.553498 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-05 02:34:15.819931 | orchestrator | 2026-01-05 02:34:15.820065 | orchestrator | # Neutron 2026-01-05 02:34:15.820089 | orchestrator | 2026-01-05 02:34:15.820103 | orchestrator | + echo 2026-01-05 02:34:15.820114 | orchestrator | + echo '# Neutron' 2026-01-05 02:34:15.820127 | orchestrator | + echo 2026-01-05 02:34:15.820138 | orchestrator | + openstack network agent list 2026-01-05 02:34:18.483207 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-05 02:34:18.483309 | orchestrator | | ID | Agent Type | Host 
| Availability Zone | Alive | State | Binary | 2026-01-05 02:34:18.483315 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-05 02:34:18.483320 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-01-05 02:34:18.483324 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-01-05 02:34:18.483328 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-01-05 02:34:18.483332 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-01-05 02:34:18.483349 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-01-05 02:34:18.483353 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-01-05 02:34:18.483357 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-05 02:34:18.483361 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-05 02:34:18.483365 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-05 02:34:18.483368 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-05 02:34:18.759654 | orchestrator | + openstack network service provider list 2026-01-05 02:34:21.301403 | orchestrator | +---------------+------+---------+ 2026-01-05 02:34:21.301522 | orchestrator | | Service Type 
| Name | Default |
+---------------+------+---------+
| L3_ROUTER_NAT | ovn  | True    |
+---------------+------+---------+

# Nova

+ echo
+ echo '# Nova'
+ echo
+ openstack compute service list
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host           | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
| 16d1a154-babd-4a68-ad4c-ddc82b48956d | nova-scheduler | testbed-node-0 | internal | enabled | up    | 2026-01-05T02:34:19.000000 |
| 48e30501-9961-4b25-b31e-d60c3faf0923 | nova-scheduler | testbed-node-1 | internal | enabled | up    | 2026-01-05T02:34:22.000000 |
| 8d9c9ff1-c9ba-44c7-9106-a024c3aa4db6 | nova-scheduler | testbed-node-2 | internal | enabled | up    | 2026-01-05T02:34:22.000000 |
| 94e90d92-f6f8-4b8e-b2b6-c70c214a7d76 | nova-conductor | testbed-node-0 | internal | enabled | up    | 2026-01-05T02:34:20.000000 |
| 2954fc9f-3f43-476a-b84f-fbc7e0fe3e99 | nova-conductor | testbed-node-2 | internal | enabled | up    | 2026-01-05T02:34:22.000000 |
| 4dd2cd6b-43c2-41ef-b8c9-f315d6c2ed24 | nova-conductor | testbed-node-1 | internal | enabled | up    | 2026-01-05T02:34:22.000000 |
| 81b8f90a-c457-4220-839a-d15b0f7084f4 | nova-compute   | testbed-node-4 | nova     | enabled | up    | 2026-01-05T02:34:19.000000 |
| 7502768a-38ce-40e5-af51-71ec8cc66113 | nova-compute   | testbed-node-3 | nova     | enabled | up    | 2026-01-05T02:34:19.000000 |
| f3cef70f-58b2-4e13-9769-ead136137f2a | nova-compute   | testbed-node-5 | nova     | enabled | up    | 2026-01-05T02:34:19.000000 |
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
+ openstack hypervisor list
+--------------------------------------+---------------------+-----------------+---------------+-------+
| ID                                   | Hypervisor Hostname | Hypervisor Type | Host IP       | State |
+--------------------------------------+---------------------+-----------------+---------------+-------+
| b0e72c2e-fbb4-47b8-8f5d-3954fa6956fa | testbed-node-4      | QEMU            | 192.168.16.14 | up    |
| ced3e828-9d41-45da-965e-a3fc8980a320 | testbed-node-3      | QEMU            | 192.168.16.13 | up    |
| 1b10e7f1-525b-488b-88b5-f88bee5b02a1 | testbed-node-5      | QEMU            | 192.168.16.15 | up    |
+--------------------------------------+---------------------+-----------------+---------------+-------+

# Run OpenStack test play
+ echo
+ echo '# Run OpenStack test play'
+ echo
+ osism apply --environment openstack test
2026-01-05 02:34:30 | INFO  | Trying to run play test in environment openstack
2026-01-05 02:34:40 | INFO  | Task 24ca1a54-b9e8-4ba3-aa22-22be1bab3671 (test) was prepared for execution.
2026-01-05 02:34:40 | INFO  | It takes a moment until task 24ca1a54-b9e8-4ba3-aa22-22be1bab3671 (test) has been started and output is visible here.

PLAY [Create test project] *****************************************************

TASK [Create test domain] ******************************************************
Monday 05 January 2026 02:34:45 +0000 (0:00:00.075)  0:00:00.075 ********
changed: [localhost]

TASK [Create test-admin user] **************************************************
Monday 05 January 2026 02:34:48 +0000 (0:00:03.779)  0:00:03.855 ********
changed: [localhost]

TASK [Add manager role to user test-admin] *************************************
Monday 05 January 2026 02:34:52 +0000 (0:00:04.173)  0:00:08.029 ********
changed: [localhost]

TASK [Create test project] *****************************************************
Monday 05 January 2026 02:34:59 +0000 (0:00:06.627)  0:00:14.656 ********
changed: [localhost]

TASK [Create test user] ********************************************************
Monday 05 January 2026 02:35:03 +0000 (0:00:04.281)  0:00:18.938 ********
changed: [localhost]

TASK [Add member roles to user test] *******************************************
Monday 05 January 2026 02:35:08 +0000 (0:00:04.319)  0:00:23.257 ********
changed: [localhost] => (item=load-balancer_member)
changed: [localhost] => (item=member)
changed: [localhost] => (item=creator)

TASK [Create test server group] ************************************************
Monday 05 January 2026 02:35:19 +0000 (0:00:11.545)  0:00:34.802 ********
changed: [localhost]

TASK [Create ssh security group] ***********************************************
Monday 05 January 2026 02:35:23 +0000 (0:00:04.222)  0:00:39.025 ********
changed: [localhost]

TASK [Add rule to ssh security group] ******************************************
Monday 05 January 2026 02:35:28 +0000 (0:00:04.871)  0:00:43.897 ********
changed: [localhost]

TASK [Create icmp security group] **********************************************
Monday 05 January 2026 02:35:33 +0000 (0:00:04.229)  0:00:48.127 ********
changed: [localhost]

TASK [Add rule to icmp security group] *****************************************
Monday 05 January 2026 02:35:37 +0000 (0:00:04.312)  0:00:52.439 ********
changed: [localhost]

TASK [Create test keypair] *****************************************************
Monday 05 January 2026 02:35:41 +0000 (0:00:04.258)  0:00:56.697 ********
changed: [localhost]

TASK [Create test network] *****************************************************
Monday 05 January 2026 02:35:45 +0000 (0:00:04.002)  0:01:00.700 ********
changed: [localhost]

TASK [Create test subnet] ******************************************************
Monday 05 January 2026 02:35:50 +0000 (0:00:05.092)  0:01:05.792 ********
changed: [localhost]

TASK [Create test router] ******************************************************
Monday 05 January 2026 02:35:56 +0000 (0:00:05.630)  0:01:11.423 ********
changed: [localhost]

TASK [Create test instances] ***************************************************
Monday 05 January 2026 02:36:08 +0000 (0:00:11.645)  0:01:23.069 ********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)

STILL ALIVE [task 'Create test instances' is running] **************************

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-2)

STILL ALIVE [task 'Create test instances' is running] **************************

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-3)

STILL ALIVE [task 'Create test instances' is running] **************************

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-4)

TASK [Add metadata to instances] ***********************************************
Monday 05 January 2026 02:40:45 +0000 (0:04:37.148)  0:06:00.217 ********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Add tag to instances] ****************************************************
Monday 05 January 2026 02:41:08 +0000 (0:00:23.799)  0:06:24.017 ********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Create test volume] ******************************************************
Monday 05 January 2026 02:41:43 +0000 (0:00:34.887)  0:06:58.904 ********
changed: [localhost]

TASK [Attach test volume] ******************************************************
Monday 05 January 2026 02:41:50 +0000 (0:00:06.373)  0:07:05.278 ********
changed: [localhost]

TASK [Create floating ip address] **********************************************
Monday 05 January 2026 02:42:03 +0000 (0:00:13.736)  0:07:19.014 ********
ok: [localhost]

TASK [Print floating ip address] ***********************************************
Monday 05 January 2026 02:42:09 +0000 (0:00:05.090)  0:07:24.105 ********
ok: [localhost] => {
    "msg": "192.168.112.181"
}

PLAY RECAP *********************************************************************
localhost : ok=22  changed=20  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0


TASKS RECAP ********************************************************************
Monday 05 January 2026 02:42:09 +0000 (0:00:00.038)  0:07:24.144 ********
===============================================================================
Create test instances ------------------------------------------------- 277.15s
Add tag to instances --------------------------------------------------- 34.89s
Add metadata to instances ---------------------------------------------- 23.80s
Attach test volume ----------------------------------------------------- 13.74s
Create test router ----------------------------------------------------- 11.65s
Add member roles to user test ------------------------------------------ 11.55s
Add manager role to user test-admin ------------------------------------- 6.63s
Create test volume ------------------------------------------------------ 6.37s
Create test subnet ------------------------------------------------------ 5.63s
Create test network ----------------------------------------------------- 5.09s
Create floating ip address ---------------------------------------------- 5.09s
Create ssh security group ----------------------------------------------- 4.87s
Create test user -------------------------------------------------------- 4.32s
Create icmp security group ---------------------------------------------- 4.31s
Create test project ----------------------------------------------------- 4.28s
Add rule to icmp security group ----------------------------------------- 4.26s
Add rule to ssh security group ------------------------------------------ 4.23s
Create test server group ------------------------------------------------ 4.22s
Create test-admin user -------------------------------------------------- 4.17s
Create test keypair ----------------------------------------------------- 4.00s
+ server_list
+ openstack --os-cloud test server list
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
| ID                                   | Name   | Status | Networks                              | Image                    | Flavor   |
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
| 842aef44-da80-4311-84e6-04f34b5a9e46 | test-4 | ACTIVE | test=192.168.112.194, 192.168.200.143 | N/A (booted from volume) | SCS-1L-1 |
| f1fb3a9a-64c1-48ba-accb-be0814e3a80e | test-3 | ACTIVE | test=192.168.112.186, 192.168.200.63  | N/A (booted from volume) | SCS-1L-1 |
| eeaf303a-bcef-41c6-9831-a98f064aa977 | test-2 | ACTIVE | test=192.168.112.162, 192.168.200.128 | N/A (booted from volume) | SCS-1L-1 |
| 019e2883-6cd9-477a-a030-40e8a40ed808 | test-1 | ACTIVE | test=192.168.112.105, 192.168.200.85  | N/A (booted from volume) | SCS-1L-1 |
| 2fc72a07-3cec-4ee6-ba3f-b439e6dc3ffa | test   | ACTIVE | test=192.168.112.181, 192.168.200.29  | N/A (booted from volume) | SCS-1L-1 |
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
+ openstack --os-cloud test server show test
+-------------------------------------+---------------------------------------------------------------------------+
| Field                               | Value                                                                     |
+-------------------------------------+---------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                    |
| OS-EXT-AZ:availability_zone         | nova                                                                      |
| OS-EXT-SRV-ATTR:host                | None                                                                      |
| OS-EXT-SRV-ATTR:hostname            | test                                                                      |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                      |
| OS-EXT-SRV-ATTR:instance_name       | None                                                                      |
| OS-EXT-SRV-ATTR:kernel_id           | None                                                                      |
| OS-EXT-SRV-ATTR:launch_index        | None                                                                      |
| OS-EXT-SRV-ATTR:ramdisk_id          | None                                                                      |
| OS-EXT-SRV-ATTR:reservation_id      | None                                                                      |
| OS-EXT-SRV-ATTR:root_device_name    | None                                                                      |
| OS-EXT-SRV-ATTR:user_data           | None                                                                      |
| OS-EXT-STS:power_state              | Running                                                                   |
| OS-EXT-STS:task_state               | None                                                                      |
| OS-EXT-STS:vm_state                 | active                                                                    |
| OS-SRV-USG:launched_at              | 2026-01-05T02:36:55.000000                                                |
| OS-SRV-USG:terminated_at            | None                                                                      |
| accessIPv4                          |                                                                           |
| accessIPv6                          |                                                                           |
| addresses                           | test=192.168.112.181, 192.168.200.29                                      |
| config_drive                        |                                                                           |
| created                             | 2026-01-05T02:36:16Z                                                      |
| description                         | None                                                                      |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 9d91f20d305db9ff9228681cc6906328f02e1dc3326a0d2941b7a908                  |
| host_status                         | None                                                                      |
| id                                  | 2fc72a07-3cec-4ee6-ba3f-b439e6dc3ffa                                      |
| image                               | N/A (booted from volume)                                                  |
| key_name                            | test                                                                      |
| locked                              | False                                                                     |
| locked_reason                       | None                                                                      |
| name                                | test                                                                      |
| pinned_availability_zone            | None                                                                      |
| progress                            | 0                                                                         |
| project_id                          | e70b22e2a92c4ca7a6078fa0202c5d0d                                          |
| properties                          | hostname='test'                                                           |
| security_groups                     | name='ssh'                                                                |
|                                     | name='icmp'                                                               |
| server_groups                       | None                                                                      |
| status                              | ACTIVE                                                                    |
| tags                                | test                                                                      |
| trusted_image_certificates          | None                                                                      |
| updated                             | 2026-01-05T02:40:49Z                                                      |
| user_id                             | eaf989eafe424c728d22ec6775d22972                                          |
| volumes_attached                    | delete_on_termination='True', id='4c127ba8-bfe1-49dc-8844-ef6d76fb6d3e'   |
|                                     | delete_on_termination='False', id='07c96da1-c1bd-437b-8783-2090ba7545eb'  |
+-------------------------------------+---------------------------------------------------------------------------+
+ openstack --os-cloud test server show test-1
+-------------------------------------+---------------------------------------------------------------------------+
| Field                               | Value                                                                     |
+-------------------------------------+---------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                    |
| OS-EXT-AZ:availability_zone         | nova                                                                      |
| OS-EXT-SRV-ATTR:host                | None                                                                      |
| OS-EXT-SRV-ATTR:hostname            | test-1                                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                      |
| OS-EXT-SRV-ATTR:instance_name       | None                                                                      |
| OS-EXT-SRV-ATTR:kernel_id           | None                                                                      |
| OS-EXT-SRV-ATTR:launch_index        | None                                                                      |
| OS-EXT-SRV-ATTR:ramdisk_id          | None                                                                      |
| OS-EXT-SRV-ATTR:reservation_id      | None                                                                      |
| OS-EXT-SRV-ATTR:root_device_name    | None                                                                      |
| OS-EXT-SRV-ATTR:user_data           | None                                                                      |
| OS-EXT-STS:power_state              | Running                                                                   |
| OS-EXT-STS:task_state               | None                                                                      |
| OS-EXT-STS:vm_state                 | active                                                                    |
| OS-SRV-USG:launched_at              | 2026-01-05T02:37:54.000000                                                |
| OS-SRV-USG:terminated_at            | None                                                                      |
| accessIPv4                          |                                                                           |
| accessIPv6                          |                                                                           |
| addresses                           | test=192.168.112.105, 192.168.200.85                                      |
| config_drive                        |                                                                           |
| created                             | 2026-01-05T02:37:16Z                                                      |
| description                         | None                                                                      |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 847097df4a6329fd96174be921c2fe797981ad6e0a12aac2c4899ce0                  |
| host_status                         | None                                                                      |
| id                                  | 019e2883-6cd9-477a-a030-40e8a40ed808                                      |
| image                               | N/A (booted from volume)                                                  |
| key_name                            | test                                                                      |
| locked                              | False                                                                     |
| locked_reason                       | None                                                                      |
| name                                | test-1                                                                    |
| pinned_availability_zone            | None                                                                      |
| progress                            | 0                                                                         |
| project_id                          | e70b22e2a92c4ca7a6078fa0202c5d0d                                          |
| properties                          | hostname='test-1'                                                         |
| security_groups                     | name='ssh'                                                                |
|                                     | name='icmp'                                                               |
| server_groups                       | None                                                                      |
| status                              | ACTIVE                                                                    |
| tags                                | test                                                                      |
| trusted_image_certificates          | None                                                                      |
| updated                             | 2026-01-05T02:40:54Z                                                      |
| user_id                             | eaf989eafe424c728d22ec6775d22972                                          |
| volumes_attached                    | delete_on_termination='True', id='e9405766-34da-462a-bb5b-231d3fddbf39'   |
+-------------------------------------+---------------------------------------------------------------------------+
+ openstack --os-cloud test server show test-2
+-------------------------------------+---------------------------------------------------------------------------+
| Field                               | Value                                                                     |
+-------------------------------------+---------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                    |
| OS-EXT-AZ:availability_zone         | nova                                                                      |
| OS-EXT-SRV-ATTR:host                | None                                                                      |
| OS-EXT-SRV-ATTR:hostname            | test-2                                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                      |
| OS-EXT-SRV-ATTR:instance_name       | None                                                                      |
| OS-EXT-SRV-ATTR:kernel_id           | None                                                                      |
| OS-EXT-SRV-ATTR:launch_index        | None                                                                      |
| OS-EXT-SRV-ATTR:ramdisk_id          | None                                                                      |
| OS-EXT-SRV-ATTR:reservation_id      | None                                                                      |
| OS-EXT-SRV-ATTR:root_device_name    | None                                                                      |
| OS-EXT-SRV-ATTR:user_data           | None                                                                      |
| OS-EXT-STS:power_state              | Running                                                                   |
| OS-EXT-STS:task_state               | None                                                                      |
| OS-EXT-STS:vm_state                 | active                                                                    |
| OS-SRV-USG:launched_at              | 2026-01-05T02:38:52.000000                                                |
| OS-SRV-USG:terminated_at            | None                                                                      |
| accessIPv4                          |                                                                           |
| accessIPv6                          |                                                                           |
| addresses                           | test=192.168.112.162, 192.168.200.128                                     |
| config_drive                        |                                                                           |
| created                             | 2026-01-05T02:38:17Z                                                      |
| description                         | None                                                                      |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 78e2f5e0416361eb4a68f27b91c8618ceb9e65295c6b0c3e834c6ba5                  |
| host_status                         | None                                                                      |
| id                                  | eeaf303a-bcef-41c6-9831-a98f064aa977                                      |
| image                               | N/A (booted from volume)                                                  |
| key_name                            | test                                                                      |
| locked                              | False                                                                     |
| locked_reason                       | None                                                                      |
| name                                | test-2                                                                    |
| pinned_availability_zone            | None                                                                      |
| progress                            | 0                                                                         |
| project_id                          | e70b22e2a92c4ca7a6078fa0202c5d0d                                          |
| properties                          | hostname='test-2'                                                         |
| security_groups                     | name='ssh'                                                                |
|                                     | name='icmp'                                                               |
| server_groups                       | None                                                                      |
| status                              | ACTIVE                                                                    |
| tags                                | test                                                                      |
| trusted_image_certificates          | None                                                                      |
| updated                             | 2026-01-05T02:40:59Z                                                      |
| user_id                             | eaf989eafe424c728d22ec6775d22972                                          |
| volumes_attached                    | delete_on_termination='True', id='a0526a4a-e724-4f8a-a71f-3184f934199e'   |
+-------------------------------------+---------------------------------------------------------------------------+
+ openstack --os-cloud test server show test-3
+-------------------------------------+---------------------------------------------------------------------------+
| Field                               | Value                                                                     |
+-------------------------------------+---------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                    |
OS-EXT-AZ:availability_zone | nova | 2026-01-05 02:42:28.096740 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-05 02:42:28.096747 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-01-05 02:42:28.096768 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-05 02:42:28.096793 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-05 02:42:28.096815 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-05 02:42:28.096822 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-05 02:42:28.096829 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-05 02:42:28.096835 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-05 02:42:28.096842 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-05 02:42:28.096848 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-05 02:42:28.096855 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-05 02:42:28.096862 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-05 02:42:28.096877 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-05 02:42:28.096884 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-05T02:39:41.000000 | 2026-01-05 02:42:28.096896 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-05 02:42:28.096903 | orchestrator | | accessIPv4 | | 2026-01-05 02:42:28.096910 | orchestrator | | accessIPv6 | | 2026-01-05 02:42:28.096916 | orchestrator | | addresses | test=192.168.112.186, 192.168.200.63 | 2026-01-05 02:42:28.096923 | orchestrator | | config_drive | | 2026-01-05 02:42:28.096929 | orchestrator | | created | 2026-01-05T02:39:12Z | 2026-01-05 02:42:28.096936 | orchestrator | | description | None | 2026-01-05 02:42:28.096947 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-05 02:42:28.096954 | orchestrator | | hostId | 847097df4a6329fd96174be921c2fe797981ad6e0a12aac2c4899ce0 | 2026-01-05 02:42:28.096961 | orchestrator | | host_status | None | 2026-01-05 02:42:28.096973 | orchestrator | | id | f1fb3a9a-64c1-48ba-accb-be0814e3a80e | 2026-01-05 02:42:28.096980 | orchestrator | | image | N/A (booted from volume) | 2026-01-05 02:42:28.096986 | orchestrator | | key_name | test | 2026-01-05 02:42:28.096993 | orchestrator | | locked | False | 2026-01-05 02:42:28.096999 | orchestrator | | locked_reason | None | 2026-01-05 02:42:28.097006 | orchestrator | | name | test-3 | 2026-01-05 02:42:28.097013 | orchestrator | | pinned_availability_zone | None | 2026-01-05 02:42:28.097507 | orchestrator | | progress | 0 | 2026-01-05 02:42:28.097603 | orchestrator | | project_id | e70b22e2a92c4ca7a6078fa0202c5d0d | 2026-01-05 02:42:28.097617 | orchestrator | | properties | hostname='test-3' | 2026-01-05 02:42:28.097642 | orchestrator | | security_groups | name='ssh' | 2026-01-05 02:42:28.097654 | orchestrator | | | name='icmp' | 2026-01-05 02:42:28.097665 | orchestrator | | server_groups | None | 2026-01-05 02:42:28.097676 | orchestrator | | status | ACTIVE | 2026-01-05 02:42:28.097687 | orchestrator | | tags | test | 2026-01-05 02:42:28.097704 | orchestrator | | trusted_image_certificates | None | 2026-01-05 02:42:28.097726 | orchestrator | | updated | 2026-01-05T02:41:04Z | 2026-01-05 02:42:28.097737 | orchestrator | | user_id | eaf989eafe424c728d22ec6775d22972 | 2026-01-05 02:42:28.097748 | orchestrator | | volumes_attached | delete_on_termination='True', id='3bad7bfb-a94f-43f0-ba9c-a35be37e387b' | 2026-01-05 02:42:28.101152 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-05 02:42:28.406245 | orchestrator | + openstack --os-cloud test server show test-4 2026-01-05 02:42:31.900701 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-05 02:42:31.900807 | orchestrator | | Field | Value | 2026-01-05 02:42:31.900821 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-05 02:42:31.900828 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-05 02:42:31.900835 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-05 02:42:31.900876 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-05 02:42:31.900883 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-01-05 02:42:31.900889 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-05 02:42:31.900895 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-05 
02:42:31.900918 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-05 02:42:31.900925 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-05 02:42:31.900932 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-05 02:42:31.900937 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-05 02:42:31.900943 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-05 02:42:31.900956 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-05 02:42:31.900966 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-05 02:42:31.900973 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-05 02:42:31.900979 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-05 02:42:31.900985 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-05T02:40:31.000000 | 2026-01-05 02:42:31.900997 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-05 02:42:31.901015 | orchestrator | | accessIPv4 | | 2026-01-05 02:42:31.901021 | orchestrator | | accessIPv6 | | 2026-01-05 02:42:31.901028 | orchestrator | | addresses | test=192.168.112.194, 192.168.200.143 | 2026-01-05 02:42:31.901040 | orchestrator | | config_drive | | 2026-01-05 02:42:31.901050 | orchestrator | | created | 2026-01-05T02:40:02Z | 2026-01-05 02:42:31.901056 | orchestrator | | description | None | 2026-01-05 02:42:31.901062 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-05 02:42:31.901067 | orchestrator | | hostId | 78e2f5e0416361eb4a68f27b91c8618ceb9e65295c6b0c3e834c6ba5 | 2026-01-05 02:42:31.901073 | orchestrator | | host_status | None | 2026-01-05 02:42:31.901086 | orchestrator | | id | 
842aef44-da80-4311-84e6-04f34b5a9e46 | 2026-01-05 02:42:31.901092 | orchestrator | | image | N/A (booted from volume) | 2026-01-05 02:42:31.901099 | orchestrator | | key_name | test | 2026-01-05 02:42:31.901105 | orchestrator | | locked | False | 2026-01-05 02:42:31.901115 | orchestrator | | locked_reason | None | 2026-01-05 02:42:31.901178 | orchestrator | | name | test-4 | 2026-01-05 02:42:31.901184 | orchestrator | | pinned_availability_zone | None | 2026-01-05 02:42:31.901189 | orchestrator | | progress | 0 | 2026-01-05 02:42:31.901193 | orchestrator | | project_id | e70b22e2a92c4ca7a6078fa0202c5d0d | 2026-01-05 02:42:31.901197 | orchestrator | | properties | hostname='test-4' | 2026-01-05 02:42:31.901205 | orchestrator | | security_groups | name='ssh' | 2026-01-05 02:42:31.901209 | orchestrator | | | name='icmp' | 2026-01-05 02:42:31.901213 | orchestrator | | server_groups | None | 2026-01-05 02:42:31.901223 | orchestrator | | status | ACTIVE | 2026-01-05 02:42:31.901229 | orchestrator | | tags | test | 2026-01-05 02:42:31.901239 | orchestrator | | trusted_image_certificates | None | 2026-01-05 02:42:31.901246 | orchestrator | | updated | 2026-01-05T02:41:08Z | 2026-01-05 02:42:31.901252 | orchestrator | | user_id | eaf989eafe424c728d22ec6775d22972 | 2026-01-05 02:42:31.901258 | orchestrator | | volumes_attached | delete_on_termination='True', id='d42da718-bc33-422d-ab4b-0cfb0b1fa848' | 2026-01-05 02:42:31.904946 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-05 02:42:32.220140 | orchestrator | + server_ping 2026-01-05 02:42:32.220924 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-05 02:42:32.220943 | orchestrator | ++ tr -d '\r' 2026-01-05 02:42:35.122731 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-05 02:42:35.122840 | orchestrator | + ping -c3 192.168.112.186 2026-01-05 02:42:35.135220 | orchestrator | PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data. 2026-01-05 02:42:35.135355 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=5.27 ms 2026-01-05 02:42:36.133588 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=2.19 ms 2026-01-05 02:42:37.135280 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=1.77 ms 2026-01-05 02:42:37.135405 | orchestrator | 2026-01-05 02:42:37.135417 | orchestrator | --- 192.168.112.186 ping statistics --- 2026-01-05 02:42:37.135426 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-05 02:42:37.135433 | orchestrator | rtt min/avg/max/mdev = 1.769/3.077/5.270/1.560 ms 2026-01-05 02:42:37.135440 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-05 02:42:37.135448 | orchestrator | + ping -c3 192.168.112.181 2026-01-05 02:42:37.147486 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 
2026-01-05 02:42:37.147582 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=6.80 ms 2026-01-05 02:42:38.145664 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.85 ms 2026-01-05 02:42:39.146106 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.08 ms 2026-01-05 02:42:39.146232 | orchestrator | 2026-01-05 02:42:39.146241 | orchestrator | --- 192.168.112.181 ping statistics --- 2026-01-05 02:42:39.146247 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-05 02:42:39.146252 | orchestrator | rtt min/avg/max/mdev = 2.080/3.908/6.798/2.067 ms 2026-01-05 02:42:39.147394 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-05 02:42:39.147458 | orchestrator | + ping -c3 192.168.112.162 2026-01-05 02:42:39.158836 | orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data. 2026-01-05 02:42:39.158919 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=6.74 ms 2026-01-05 02:42:40.156656 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.73 ms 2026-01-05 02:42:41.158534 | orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=2.04 ms 2026-01-05 02:42:41.158650 | orchestrator | 2026-01-05 02:42:41.158670 | orchestrator | --- 192.168.112.162 ping statistics --- 2026-01-05 02:42:41.158685 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-05 02:42:41.158700 | orchestrator | rtt min/avg/max/mdev = 2.036/3.836/6.742/2.074 ms 2026-01-05 02:42:41.158715 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-05 02:42:41.158729 | orchestrator | + ping -c3 192.168.112.105 2026-01-05 02:42:41.172334 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data. 
2026-01-05 02:42:41.172444 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=8.39 ms 2026-01-05 02:42:42.168013 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.67 ms 2026-01-05 02:42:43.169344 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.88 ms 2026-01-05 02:42:43.169428 | orchestrator | 2026-01-05 02:42:43.169436 | orchestrator | --- 192.168.112.105 ping statistics --- 2026-01-05 02:42:43.169443 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-01-05 02:42:43.169464 | orchestrator | rtt min/avg/max/mdev = 1.875/4.311/8.387/2.900 ms 2026-01-05 02:42:43.170423 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-05 02:42:43.170474 | orchestrator | + ping -c3 192.168.112.194 2026-01-05 02:42:43.181609 | orchestrator | PING 192.168.112.194 (192.168.112.194) 56(84) bytes of data. 2026-01-05 02:42:43.181689 | orchestrator | 64 bytes from 192.168.112.194: icmp_seq=1 ttl=63 time=7.55 ms 2026-01-05 02:42:44.178697 | orchestrator | 64 bytes from 192.168.112.194: icmp_seq=2 ttl=63 time=2.73 ms 2026-01-05 02:42:45.180836 | orchestrator | 64 bytes from 192.168.112.194: icmp_seq=3 ttl=63 time=2.56 ms 2026-01-05 02:42:45.180967 | orchestrator | 2026-01-05 02:42:45.181113 | orchestrator | --- 192.168.112.194 ping statistics --- 2026-01-05 02:42:45.181134 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-01-05 02:42:45.181177 | orchestrator | rtt min/avg/max/mdev = 2.555/4.276/7.548/2.314 ms 2026-01-05 02:42:45.181200 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-05 02:42:45.504917 | orchestrator | ok: Runtime: 0:13:09.969024 2026-01-05 02:42:45.568586 | 2026-01-05 02:42:45.568748 | TASK [Run tempest] 2026-01-05 02:42:46.105158 | orchestrator | skipping: Conditional result was False 2026-01-05 02:42:46.128771 | 2026-01-05 
02:42:46.128978 | TASK [Check prometheus alert status] 2026-01-05 02:42:46.671320 | orchestrator | skipping: Conditional result was False 2026-01-05 02:42:46.686088 | 2026-01-05 02:42:46.686258 | PLAY [Upgrade testbed] 2026-01-05 02:42:46.700103 | 2026-01-05 02:42:46.700303 | TASK [Print next ceph version] 2026-01-05 02:42:46.779461 | orchestrator | ok 2026-01-05 02:42:46.792617 | 2026-01-05 02:42:46.792796 | TASK [Print next openstack version] 2026-01-05 02:42:46.872424 | orchestrator | ok 2026-01-05 02:42:46.883990 | 2026-01-05 02:42:46.884123 | TASK [Print next manager version] 2026-01-05 02:42:46.955508 | orchestrator | ok 2026-01-05 02:42:46.966785 | 2026-01-05 02:42:46.966948 | TASK [Set facts (Zuul deployment)] 2026-01-05 02:42:47.036482 | orchestrator | ok 2026-01-05 02:42:47.049615 | 2026-01-05 02:42:47.049787 | TASK [Set facts (local deployment)] 2026-01-05 02:42:47.085972 | orchestrator | skipping: Conditional result was False 2026-01-05 02:42:47.103110 | 2026-01-05 02:42:47.103275 | TASK [Fetch manager address] 2026-01-05 02:42:47.411501 | orchestrator | ok 2026-01-05 02:42:47.421507 | 2026-01-05 02:42:47.421724 | TASK [Set manager_host address] 2026-01-05 02:42:47.501776 | orchestrator | ok 2026-01-05 02:42:47.514131 | 2026-01-05 02:42:47.514276 | TASK [Run upgrade] 2026-01-05 02:42:48.305445 | orchestrator | + set -e 2026-01-05 02:42:48.305594 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 02:42:48.305607 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 02:42:48.305618 | orchestrator | ++ INTERACTIVE=false 2026-01-05 02:42:48.305624 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 02:42:48.305630 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 02:42:48.306081 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-01-05 02:42:48.341275 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-01-05 02:42:48.342337 | orchestrator | ++ 
docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-01-05 02:42:48.371926 | orchestrator | 2026-01-05 02:42:48.372014 | orchestrator | # UPGRADE 2026-01-05 02:42:48.372025 | orchestrator | 2026-01-05 02:42:48.372030 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-01-05 02:42:48.372036 | orchestrator | + echo 2026-01-05 02:42:48.372043 | orchestrator | + echo '# UPGRADE' 2026-01-05 02:42:48.372047 | orchestrator | + echo 2026-01-05 02:42:48.372051 | orchestrator | + export MANAGER_VERSION=latest 2026-01-05 02:42:48.372056 | orchestrator | + MANAGER_VERSION=latest 2026-01-05 02:42:48.372064 | orchestrator | + CEPH_VERSION=skip 2026-01-05 02:42:48.372068 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-01-05 02:42:48.372072 | orchestrator | + KOLLA_NAMESPACE=kolla 2026-01-05 02:42:48.372076 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh latest 2026-01-05 02:42:48.377712 | orchestrator | + set -e 2026-01-05 02:42:48.377784 | orchestrator | + VERSION=latest 2026-01-05 02:42:48.377792 | orchestrator | + sed -i 's/manager_version: .*/manager_version: latest/g' /opt/configuration/environments/manager/configuration.yml 2026-01-05 02:42:48.385066 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-05 02:42:48.385165 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-01-05 02:42:48.390212 | orchestrator | + set -e 2026-01-05 02:42:48.390273 | orchestrator | + pushd /opt/configuration 2026-01-05 02:42:48.390291 | orchestrator | /opt/configuration ~ 2026-01-05 02:42:48.390298 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 02:42:48.390305 | orchestrator | + source /opt/venv/bin/activate 2026-01-05 02:42:48.391780 | orchestrator | ++ deactivate nondestructive 2026-01-05 02:42:48.391830 | orchestrator | ++ '[' -n '' ']' 2026-01-05 02:42:48.391836 | orchestrator | ++ '[' -n '' ']' 2026-01-05 02:42:48.391840 | orchestrator | ++ hash -r 2026-01-05 
02:42:48.391845 | orchestrator | ++ '[' -n '' ']' 2026-01-05 02:42:48.391849 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-05 02:42:48.391853 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-05 02:42:48.391860 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-01-05 02:42:48.391876 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-05 02:42:48.391884 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-05 02:42:48.391891 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-05 02:42:48.391897 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-05 02:42:48.391904 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 02:42:48.391912 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 02:42:48.391919 | orchestrator | ++ export PATH 2026-01-05 02:42:48.391926 | orchestrator | ++ '[' -n '' ']' 2026-01-05 02:42:48.391931 | orchestrator | ++ '[' -z '' ']' 2026-01-05 02:42:48.391935 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-05 02:42:48.391939 | orchestrator | ++ PS1='(venv) ' 2026-01-05 02:42:48.391943 | orchestrator | ++ export PS1 2026-01-05 02:42:48.391947 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-05 02:42:48.391951 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-05 02:42:48.391954 | orchestrator | ++ hash -r 2026-01-05 02:42:48.391958 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-01-05 02:42:49.662430 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-01-05 02:42:49.663353 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-01-05 02:42:49.664930 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages 
(3.1.6) 2026-01-05 02:42:49.666536 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-01-05 02:42:49.667795 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2026-01-05 02:42:49.680382 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-01-05 02:42:49.682315 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-01-05 02:42:49.683555 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-01-05 02:42:49.685194 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-01-05 02:42:49.720398 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-01-05 02:42:49.721849 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-01-05 02:42:49.723515 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.2) 2026-01-05 02:42:49.724925 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-01-05 02:42:49.728987 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-01-05 02:42:49.953756 | orchestrator | ++ which gilt 2026-01-05 02:42:49.954896 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-01-05 02:42:49.954925 | orchestrator | + /opt/venv/bin/gilt overlay 2026-01-05 02:42:50.196304 | orchestrator | osism.cfg-generics: 2026-01-05 02:42:50.280365 | 
orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-01-05 02:42:50.280959 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-01-05 02:42:50.282391 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-01-05 02:42:50.282418 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-01-05 02:42:51.270532 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-01-05 02:42:51.288028 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-01-05 02:42:51.754475 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-01-05 02:42:51.818073 | orchestrator | ~ 2026-01-05 02:42:51.818179 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 02:42:51.818188 | orchestrator | + deactivate 2026-01-05 02:42:51.818196 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-05 02:42:51.818202 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 02:42:51.818207 | orchestrator | + export PATH 2026-01-05 02:42:51.818211 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-05 02:42:51.818216 | orchestrator | + '[' -n '' ']' 2026-01-05 02:42:51.818220 | orchestrator | + hash -r 2026-01-05 02:42:51.818225 | orchestrator | + '[' -n '' ']' 2026-01-05 02:42:51.818230 | orchestrator | + unset VIRTUAL_ENV 2026-01-05 02:42:51.818234 | orchestrator | + unset VIRTUAL_ENV_PROMPT 
2026-01-05 02:42:51.818239 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-01-05 02:42:51.818243 | orchestrator | + unset -f deactivate 2026-01-05 02:42:51.818247 | orchestrator | + popd 2026-01-05 02:42:51.819519 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 02:42:51.819535 | orchestrator | + [[ skip != \s\k\i\p ]] 2026-01-05 02:42:51.820071 | orchestrator | + echo 'export SKIP_CEPH_UPGRADE=true' 2026-01-05 02:42:51.820991 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-01-05 02:42:51.838286 | orchestrator | export SKIP_CEPH_UPGRADE=true 2026-01-05 02:42:51.840051 | orchestrator | + [[ 2024.2 != \s\k\i\p ]] 2026-01-05 02:42:51.840085 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-01-05 02:42:51.846492 | orchestrator | + set -e 2026-01-05 02:42:51.846587 | orchestrator | + VERSION=2024.2 2026-01-05 02:42:51.847289 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-05 02:42:51.850682 | orchestrator | + [[ -n '' ]] 2026-01-05 02:42:51.850779 | orchestrator | + sed -i -e '/manager_version: .*/a\' -e 'openstack_version: 2024.2' /opt/configuration/environments/manager/configuration.yml 2026-01-05 02:42:51.856903 | orchestrator | + echo 'export SKIP_OPENSTACK_UPGRADE=false' 2026-01-05 02:42:51.856991 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-01-05 02:42:51.869826 | orchestrator | export SKIP_OPENSTACK_UPGRADE=false 2026-01-05 02:42:51.873675 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-05 02:42:51.881708 | orchestrator | + set -e 2026-01-05 02:42:51.881787 | orchestrator | + NAMESPACE=kolla 2026-01-05 02:42:51.881829 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-05 02:42:51.890079 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-01-05 02:42:51.898764 | 
orchestrator | /opt/configuration ~ 2026-01-05 02:42:51.898893 | orchestrator | + set -e 2026-01-05 02:42:51.898909 | orchestrator | + pushd /opt/configuration 2026-01-05 02:42:51.898920 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-05 02:42:51.898931 | orchestrator | + source /opt/venv/bin/activate 2026-01-05 02:42:51.898955 | orchestrator | ++ deactivate nondestructive 2026-01-05 02:42:51.898966 | orchestrator | ++ '[' -n '' ']' 2026-01-05 02:42:51.898976 | orchestrator | ++ '[' -n '' ']' 2026-01-05 02:42:51.898987 | orchestrator | ++ hash -r 2026-01-05 02:42:51.898998 | orchestrator | ++ '[' -n '' ']' 2026-01-05 02:42:51.899007 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-05 02:42:51.899023 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-05 02:42:51.899212 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-01-05 02:42:51.899229 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-05 02:42:51.899239 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-05 02:42:51.899253 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-05 02:42:51.899264 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-05 02:42:51.899275 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 02:42:51.899287 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-05 02:42:51.899742 | orchestrator | ++ export PATH 2026-01-05 02:42:51.899764 | orchestrator | ++ '[' -n '' ']' 2026-01-05 02:42:51.899777 | orchestrator | ++ '[' -z '' ']' 2026-01-05 02:42:51.899788 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-05 02:42:51.899799 | orchestrator | ++ PS1='(venv) ' 2026-01-05 02:42:51.899810 | orchestrator | ++ export PS1 2026-01-05 02:42:51.899819 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-05 02:42:51.899826 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-05 
02:42:51.899833 | orchestrator | ++ hash -r
2026-01-05 02:42:51.899840 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-01-05 02:42:52.468492 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-01-05 02:42:52.469438 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-01-05 02:42:52.470903 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-01-05 02:42:52.472247 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-01-05 02:42:52.473322 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2026-01-05 02:42:52.483533 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-01-05 02:42:52.485361 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-01-05 02:42:52.486394 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-01-05 02:42:52.487782 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-01-05 02:42:52.521831 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-01-05 02:42:52.523335 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-01-05 02:42:52.525101 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.2)
2026-01-05 02:42:52.526319 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-01-05 02:42:52.530403 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-01-05 02:42:52.753945 | orchestrator | ++ which gilt
2026-01-05 02:42:52.757332 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-01-05 02:42:52.757413 | orchestrator | + /opt/venv/bin/gilt overlay
2026-01-05 02:42:52.937074 | orchestrator | osism.cfg-generics:
2026-01-05 02:42:53.015392 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-01-05 02:42:53.015483 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-01-05 02:42:53.015495 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-01-05 02:42:53.015505 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-01-05 02:42:53.485667 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-01-05 02:42:53.498338 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-01-05 02:42:53.851039 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-01-05 02:42:53.924573 | orchestrator | ~
2026-01-05 02:42:53.924655 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-05 02:42:53.924665 | orchestrator | + deactivate
2026-01-05 02:42:53.924674 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-05 02:42:53.924683 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-05 02:42:53.924691 | orchestrator | + export PATH
2026-01-05 02:42:53.924698 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-05 02:42:53.924704 | orchestrator | + '[' -n '' ']'
2026-01-05 02:42:53.924711 | orchestrator | + hash -r
2026-01-05 02:42:53.924718 | orchestrator | + '[' -n '' ']'
2026-01-05 02:42:53.924725 | orchestrator | + unset VIRTUAL_ENV
2026-01-05 02:42:53.924755 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-05 02:42:53.924762 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-01-05 02:42:53.924769 | orchestrator | + unset -f deactivate
2026-01-05 02:42:53.924776 | orchestrator | + popd
2026-01-05 02:42:53.926736 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-01-05 02:42:53.989848 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 02:42:53.990749 | orchestrator | ++ semver latest 10.0.0-0
2026-01-05 02:42:54.060992 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 02:42:54.062257 | orchestrator | ++ semver 2024.2 2025.1
2026-01-05 02:42:54.129272 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-05 02:42:54.131646 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-01-05 02:42:54.194962 | orchestrator | ++ '[' -1 -le 0 ']'
2026-01-05 02:42:54.195373 | orchestrator | +++ semver latest 10.0.0-0
2026-01-05 02:42:54.259854 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-01-05 02:42:54.259939 | orchestrator | ++ echo false
2026-01-05 02:42:54.260735 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=false
2026-01-05 02:42:54.262648 | orchestrator | +++ semver 2024.2 2024.2
2026-01-05 02:42:54.347202 | orchestrator | ++ '[' 0 -le 0 ']'
2026-01-05 02:42:54.348430 | orchestrator | +++ semver 2024.2 2025.1
2026-01-05 02:42:54.416712 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-01-05 02:42:54.416852 | orchestrator | ++ echo false
2026-01-05 02:42:54.417845 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-01-05 02:42:54.417889 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-05 02:42:54.417899 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-05 02:42:54.417909 | orchestrator | + osism update manager
2026-01-05 02:42:59.812639 | orchestrator | Collecting ansible==11.11.0
2026-01-05 02:42:59.930658 | orchestrator | Downloading ansible-11.11.0-py3-none-any.whl.metadata (8.1 kB)
2026-01-05 02:42:59.983547 | orchestrator | Collecting netaddr==1.3.0
2026-01-05 02:43:00.002913 | orchestrator | Downloading netaddr-1.3.0-py3-none-any.whl.metadata (5.0 kB)
2026-01-05 02:43:00.158720 | orchestrator | Collecting ansible-core~=2.18.10 (from ansible==11.11.0)
2026-01-05 02:43:00.176900 | orchestrator | Downloading ansible_core-2.18.12-py3-none-any.whl.metadata (7.7 kB)
2026-01-05 02:43:00.225082 | orchestrator | Collecting jinja2>=3.0.0 (from ansible-core~=2.18.10->ansible==11.11.0)
2026-01-05 02:43:00.242394 | orchestrator | Downloading jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
2026-01-05 02:43:00.323228 | orchestrator | Collecting PyYAML>=5.1 (from ansible-core~=2.18.10->ansible==11.11.0)
2026-01-05 02:43:00.340128 | orchestrator | Downloading pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (2.4 kB)
2026-01-05 02:43:00.754729 | orchestrator | Collecting cryptography (from ansible-core~=2.18.10->ansible==11.11.0)
2026-01-05 02:43:00.770422 | orchestrator | Downloading cryptography-46.0.3-cp311-abi3-manylinux_2_34_x86_64.whl.metadata (5.7 kB)
2026-01-05 02:43:00.813589 | orchestrator | Collecting packaging (from ansible-core~=2.18.10->ansible==11.11.0)
2026-01-05 02:43:00.828973 | orchestrator | Downloading packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
2026-01-05 02:43:00.859731 | orchestrator | Collecting resolvelib<1.1.0,>=0.5.3 (from ansible-core~=2.18.10->ansible==11.11.0)
2026-01-05 02:43:00.875390 | orchestrator | Downloading resolvelib-1.0.1-py2.py3-none-any.whl.metadata (4.0 kB)
2026-01-05 02:43:00.975521 | orchestrator | Collecting MarkupSafe>=2.0 (from jinja2>=3.0.0->ansible-core~=2.18.10->ansible==11.11.0)
2026-01-05 02:43:00.991189 | orchestrator | Downloading markupsafe-3.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (2.7 kB)
2026-01-05 02:43:01.212619 | orchestrator | Collecting cffi>=2.0.0 (from cryptography->ansible-core~=2.18.10->ansible==11.11.0)
2026-01-05 02:43:01.227853 | orchestrator | Downloading cffi-2.0.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.6 kB)
2026-01-05 02:43:01.258345 | orchestrator | Collecting pycparser (from cffi>=2.0.0->cryptography->ansible-core~=2.18.10->ansible==11.11.0)
2026-01-05 02:43:01.273422 | orchestrator | Downloading pycparser-2.23-py3-none-any.whl.metadata (993 bytes)
2026-01-05 02:43:01.297481 | orchestrator | Downloading ansible-11.11.0-py3-none-any.whl (57.1 MB)
2026-01-05 02:43:03.150995 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.1/57.1 MB 35.0 MB/s eta 0:00:00
2026-01-05 02:43:03.169589 | orchestrator | Downloading netaddr-1.3.0-py3-none-any.whl (2.3 MB)
2026-01-05 02:43:03.235793 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 37.0 MB/s eta 0:00:00
2026-01-05 02:43:03.255792 | orchestrator | Downloading ansible_core-2.18.12-py3-none-any.whl (2.2 MB)
2026-01-05 02:43:03.320612 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2/2.2 MB 37.3 MB/s eta 0:00:00
2026-01-05 02:43:03.337799 | orchestrator | Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)
2026-01-05 02:43:03.345538 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 134.9/134.9 kB 122.6 MB/s eta 0:00:00
2026-01-05 02:43:03.365683 | orchestrator | Downloading pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (807 kB)
2026-01-05 02:43:03.390550 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 807.9/807.9 kB 41.0 MB/s eta 0:00:00
2026-01-05 02:43:03.409255 | orchestrator | Downloading resolvelib-1.0.1-py2.py3-none-any.whl (17 kB)
2026-01-05 02:43:03.427921 | orchestrator | Downloading cryptography-46.0.3-cp311-abi3-manylinux_2_34_x86_64.whl (4.5 MB)
2026-01-05 02:43:03.559567 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.5/4.5 MB 36.3 MB/s eta 0:00:00
2026-01-05 02:43:03.578224 | orchestrator | Downloading packaging-25.0-py3-none-any.whl (66 kB)
2026-01-05 02:43:03.583109 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.5/66.5 kB 134.3 MB/s eta 0:00:00
2026-01-05 02:43:03.600409 | orchestrator | Downloading cffi-2.0.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (219 kB)
2026-01-05 02:43:03.606302 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 219.6/219.6 kB 188.8 MB/s eta 0:00:00
2026-01-05 02:43:03.622853 | orchestrator | Downloading markupsafe-3.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (22 kB)
2026-01-05 02:43:03.640716 | orchestrator | Downloading pycparser-2.23-py3-none-any.whl (118 kB)
2026-01-05 02:43:03.646678 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 118.1/118.1 kB 150.8 MB/s eta 0:00:00
2026-01-05 02:43:04.280649 | orchestrator | Installing collected packages: resolvelib, PyYAML, pycparser, packaging, netaddr, MarkupSafe, jinja2, cffi, cryptography, ansible-core, ansible
2026-01-05 02:43:41.454641 | orchestrator | Successfully installed MarkupSafe-3.0.3 PyYAML-6.0.3 ansible-11.11.0 ansible-core-2.18.12 cffi-2.0.0 cryptography-46.0.3 jinja2-3.1.6 netaddr-1.3.0 packaging-25.0 pycparser-2.23 resolvelib-1.0.1
2026-01-05 02:43:42.214114 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-206602f9xzw_ii/tmpu1zu3_uy/ansible-collection-serviceswy3nwea2'...
2026-01-05 02:43:43.624186 | orchestrator | Your branch is up to date with 'origin/main'.
2026-01-05 02:43:43.624310 | orchestrator | Already on 'main'
2026-01-05 02:43:44.167408 | orchestrator | Starting galaxy collection install process
2026-01-05 02:43:44.167514 | orchestrator | Process install dependency map
2026-01-05 02:43:44.167526 | orchestrator | Starting collection install process
2026-01-05 02:43:44.167534 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-01-05 02:43:44.167543 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-01-05 02:43:44.167551 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-05 02:43:44.672815 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-206668x884w0xy/tmpgw5y3uoo/ansible-playbooks-managerwoe5wee1'...
2026-01-05 02:43:45.268269 | orchestrator | Your branch is up to date with 'origin/main'.
2026-01-05 02:43:45.268354 | orchestrator | Already on 'main'
2026-01-05 02:43:45.576162 | orchestrator | Starting galaxy collection install process
2026-01-05 02:43:45.576261 | orchestrator | Process install dependency map
2026-01-05 02:43:45.576272 | orchestrator | Starting collection install process
2026-01-05 02:43:45.576280 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-01-05 02:43:45.576287 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-01-05 02:43:45.576293 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-01-05 02:43:46.171940 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-01-05 02:43:46.172040 | orchestrator | -vvvv to see details
2026-01-05 02:43:46.358515 | orchestrator |
2026-01-05 02:43:46.358617 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-01-05 02:43:46.358631 | orchestrator |
2026-01-05 02:43:46.358641 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-05 02:43:50.555790 | orchestrator | ok: [testbed-manager]
2026-01-05 02:43:50.555919 | orchestrator |
2026-01-05 02:43:50.555944 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-01-05 02:43:50.629618 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-01-05 02:43:50.629711 | orchestrator |
2026-01-05 02:43:50.629721 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-01-05 02:43:52.769051 | orchestrator | ok: [testbed-manager]
2026-01-05 02:43:52.769135 | orchestrator |
2026-01-05 02:43:52.769143 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-01-05 02:43:52.814376 | orchestrator | ok: [testbed-manager]
2026-01-05 02:43:52.814471 | orchestrator |
2026-01-05 02:43:52.814486 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-01-05 02:43:52.891634 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-01-05 02:43:52.891717 | orchestrator |
2026-01-05 02:43:52.891726 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-01-05 02:43:57.363040 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-01-05 02:43:57.363267 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-01-05 02:43:57.363301 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-01-05 02:43:57.363320 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-01-05 02:43:57.363332 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-01-05 02:43:57.363343 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-01-05 02:43:57.363354 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-01-05 02:43:57.363365 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-01-05 02:43:57.363376 | orchestrator |
2026-01-05 02:43:57.363388 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-01-05 02:43:58.467866 | orchestrator | ok: [testbed-manager]
2026-01-05 02:43:58.467992 | orchestrator |
2026-01-05 02:43:58.468013 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-01-05 02:43:59.522582 | orchestrator | ok: [testbed-manager]
2026-01-05 02:43:59.522674 | orchestrator |
2026-01-05 02:43:59.522687 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-01-05 02:43:59.612077 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-01-05 02:43:59.612184 | orchestrator |
2026-01-05 02:43:59.612313 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-01-05 02:44:01.504103 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-01-05 02:44:01.504178 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-01-05 02:44:01.504183 | orchestrator |
2026-01-05 02:44:01.504189 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-01-05 02:44:02.456735 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:02.456840 | orchestrator |
2026-01-05 02:44:02.456851 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-01-05 02:44:02.508838 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:44:02.508919 | orchestrator |
2026-01-05 02:44:02.508928 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-01-05 02:44:02.598648 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-01-05 02:44:02.598721 | orchestrator |
2026-01-05 02:44:02.598729 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-01-05 02:44:03.596161 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:03.596289 | orchestrator |
2026-01-05 02:44:03.596301 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-01-05 02:44:03.657692 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-01-05 02:44:03.657782 | orchestrator |
2026-01-05 02:44:03.657794 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-01-05 02:44:05.723278 | orchestrator | ok: [testbed-manager] => (item=None)
2026-01-05 02:44:05.724150 | orchestrator | ok: [testbed-manager] => (item=None)
2026-01-05 02:44:05.724185 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:05.724198 | orchestrator |
2026-01-05 02:44:05.724227 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-01-05 02:44:06.719829 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:06.719938 | orchestrator |
2026-01-05 02:44:06.719953 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-01-05 02:44:06.791999 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:44:06.792100 | orchestrator |
2026-01-05 02:44:06.792118 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-01-05 02:44:06.887084 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-01-05 02:44:06.887194 | orchestrator |
2026-01-05 02:44:06.887209 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-01-05 02:44:07.609163 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:07.609302 | orchestrator |
2026-01-05 02:44:07.609316 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-01-05 02:44:08.156492 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:08.156607 | orchestrator |
2026-01-05 02:44:08.156644 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-01-05 02:44:10.029192 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-01-05 02:44:10.029359 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-01-05 02:44:10.029374 | orchestrator |
2026-01-05 02:44:10.029385 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-01-05 02:44:11.059007 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:11.059978 | orchestrator |
2026-01-05 02:44:11.060028 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-01-05 02:44:11.659491 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:11.659587 | orchestrator |
2026-01-05 02:44:11.659597 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-01-05 02:44:12.331268 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:12.331361 | orchestrator |
2026-01-05 02:44:12.331394 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-01-05 02:44:12.382270 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:44:12.382349 | orchestrator |
2026-01-05 02:44:12.382357 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-01-05 02:44:12.470572 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-01-05 02:44:12.470648 | orchestrator |
2026-01-05 02:44:12.470655 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-01-05 02:44:12.530869 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:12.530985 | orchestrator |
2026-01-05 02:44:12.531010 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-01-05 02:44:15.494066 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-01-05 02:44:15.494154 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-01-05 02:44:15.494165 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-01-05 02:44:15.494172 | orchestrator |
2026-01-05 02:44:15.494180 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-01-05 02:44:16.458482 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:16.458579 | orchestrator |
2026-01-05 02:44:16.458590 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-01-05 02:44:17.510157 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:17.510304 | orchestrator |
2026-01-05 02:44:17.510314 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-01-05 02:44:18.513195 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:18.513371 | orchestrator |
2026-01-05 02:44:18.513379 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-01-05 02:44:18.596056 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-01-05 02:44:18.596135 | orchestrator |
2026-01-05 02:44:18.596142 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-01-05 02:44:18.649262 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:18.649362 | orchestrator |
2026-01-05 02:44:18.649377 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-01-05 02:44:19.673966 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-01-05 02:44:19.674069 | orchestrator |
2026-01-05 02:44:19.674077 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-01-05 02:44:19.769784 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-01-05 02:44:19.769890 | orchestrator |
2026-01-05 02:44:19.769903 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-01-05 02:44:20.778919 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:20.779020 | orchestrator |
2026-01-05 02:44:20.779032 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-01-05 02:44:21.854565 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:21.854662 | orchestrator |
2026-01-05 02:44:21.854674 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-01-05 02:44:21.917599 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:44:21.917700 | orchestrator |
2026-01-05 02:44:21.917708 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-01-05 02:44:21.982555 | orchestrator | ok: [testbed-manager]
2026-01-05 02:44:21.982667 | orchestrator |
2026-01-05 02:44:21.982679 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-01-05 02:44:23.326334 | orchestrator | changed: [testbed-manager]
2026-01-05 02:44:23.326438 | orchestrator |
2026-01-05 02:44:23.326449 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-01-05 02:45:31.897152 | orchestrator | changed: [testbed-manager]
2026-01-05 02:45:31.897241 | orchestrator |
2026-01-05 02:45:31.897250 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-01-05 02:45:33.240751 | orchestrator | ok: [testbed-manager]
2026-01-05 02:45:33.240829 | orchestrator |
2026-01-05 02:45:33.240836 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-01-05 02:45:33.309983 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:45:33.310103 | orchestrator |
2026-01-05 02:45:33.310113 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-01-05 02:45:34.266268 | orchestrator | ok: [testbed-manager]
2026-01-05 02:45:34.266394 | orchestrator |
2026-01-05 02:45:34.266403 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-05 02:45:34.352522 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:45:34.352594 | orchestrator |
2026-01-05 02:45:34.352601 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-05 02:45:34.352606 | orchestrator |
2026-01-05 02:45:34.352610 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-05 02:45:52.245236 | orchestrator | changed: [testbed-manager]
2026-01-05 02:45:52.245381 | orchestrator |
2026-01-05 02:45:52.245402 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-05 02:46:52.316675 | orchestrator | Pausing for 60 seconds
2026-01-05 02:46:52.316765 | orchestrator | changed: [testbed-manager]
2026-01-05 02:46:52.316772 | orchestrator |
2026-01-05 02:46:52.316778 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-01-05 02:46:52.377928 | orchestrator | ok: [testbed-manager]
2026-01-05 02:46:52.378101 | orchestrator |
2026-01-05 02:46:52.378119 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-05 02:46:55.944742 | orchestrator | changed: [testbed-manager]
2026-01-05 02:46:55.944828 | orchestrator |
2026-01-05 02:46:55.944836 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-05 02:47:58.861784 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-05 02:47:58.861918 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-01-05 02:47:58.861940 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-01-05 02:47:58.861956 | orchestrator | changed: [testbed-manager]
2026-01-05 02:47:58.861974 | orchestrator |
2026-01-05 02:47:58.861991 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-01-05 02:48:10.720375 | orchestrator | changed: [testbed-manager]
2026-01-05 02:48:10.720495 | orchestrator |
2026-01-05 02:48:10.720503 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-01-05 02:48:10.808155 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-01-05 02:48:10.808249 | orchestrator |
2026-01-05 02:48:10.808267 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-05 02:48:10.808282 | orchestrator |
2026-01-05 02:48:10.808295 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-01-05 02:48:10.866766 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:48:10.866841 | orchestrator |
2026-01-05 02:48:10.866848 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-01-05 02:48:10.929082 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-01-05 02:48:10.929225 | orchestrator |
2026-01-05 02:48:10.929235 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-01-05 02:48:12.109417 | orchestrator | changed: [testbed-manager]
2026-01-05 02:48:12.109494 | orchestrator |
2026-01-05 02:48:12.109501 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-01-05 02:48:15.720834 | orchestrator | ok: [testbed-manager]
2026-01-05 02:48:15.720911 | orchestrator |
2026-01-05 02:48:15.720919 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-05 02:48:15.809708 | orchestrator | ok: [testbed-manager] => {
2026-01-05 02:48:15.809805 | orchestrator |     "version_check_result.stdout_lines": [
2026-01-05 02:48:15.809818 | orchestrator |         "=== OSISM Container Version Check ===",
2026-01-05 02:48:15.809826 | orchestrator |         "Checking running containers against expected versions...",
2026-01-05 02:48:15.809835 | orchestrator |         "",
2026-01-05 02:48:15.809844 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-05 02:48:15.809852 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-05 02:48:15.809860 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.809867 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-05 02:48:15.809875 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.809882 | orchestrator |         "",
2026-01-05 02:48:15.809889 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-05 02:48:15.809905 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-01-05 02:48:15.809913 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.809921 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:latest",
2026-01-05 02:48:15.809928 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.809935 | orchestrator |         "",
2026-01-05 02:48:15.809942 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-05 02:48:15.809950 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-05 02:48:15.809957 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.809964 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-05 02:48:15.809971 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.809979 | orchestrator |         "",
2026-01-05 02:48:15.809986 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-05 02:48:15.809993 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:quincy",
2026-01-05 02:48:15.810000 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810007 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:quincy",
2026-01-05 02:48:15.810062 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810072 | orchestrator |         "",
2026-01-05 02:48:15.810079 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-05 02:48:15.810087 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-01-05 02:48:15.810094 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810101 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-01-05 02:48:15.810108 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810115 | orchestrator |         "",
2026-01-05 02:48:15.810122 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-01-05 02:48:15.810130 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810137 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810145 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810152 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810159 | orchestrator |         "",
2026-01-05 02:48:15.810167 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-01-05 02:48:15.810174 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-05 02:48:15.810185 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810193 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-05 02:48:15.810202 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810209 | orchestrator |         "",
2026-01-05 02:48:15.810216 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-01-05 02:48:15.810247 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-05 02:48:15.810256 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810264 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-05 02:48:15.810272 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810280 | orchestrator |         "",
2026-01-05 02:48:15.810288 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-01-05 02:48:15.810295 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-01-05 02:48:15.810301 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810307 | orchestrator |         "  Running: registry.osism.tech/osism/osism-frontend:latest",
2026-01-05 02:48:15.810313 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810319 | orchestrator |         "",
2026-01-05 02:48:15.810325 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-01-05 02:48:15.810332 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-05 02:48:15.810340 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810348 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-05 02:48:15.810356 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810364 | orchestrator |         "",
2026-01-05 02:48:15.810372 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-01-05 02:48:15.810380 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810389 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810417 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810423 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810429 | orchestrator |         "",
2026-01-05 02:48:15.810435 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-01-05 02:48:15.810443 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810450 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810458 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810466 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810473 | orchestrator |         "",
2026-01-05 02:48:15.810479 | orchestrator |         "Checking service: openstack (OpenStack Integration)",
2026-01-05 02:48:15.810484 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810489 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810495 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810507 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810513 | orchestrator |         "",
2026-01-05 02:48:15.810519 | orchestrator |         "Checking service: beat (Celery Beat Scheduler)",
2026-01-05 02:48:15.810525 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810531 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810536 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810542 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810549 | orchestrator |         "",
2026-01-05 02:48:15.810555 | orchestrator |         "Checking service: flower (Celery Flower Monitor)",
2026-01-05 02:48:15.810580 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810586 | orchestrator |         "  Enabled: true",
2026-01-05 02:48:15.810592 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-05 02:48:15.810598 | orchestrator |         "  Status: ✅ MATCH",
2026-01-05 02:48:15.810604 | orchestrator |         "",
2026-01-05 02:48:15.810610 | orchestrator |         "=== Summary ===",
2026-01-05 02:48:15.810616 | orchestrator |         "Errors (version mismatches): 0",
2026-01-05 02:48:15.810621 | orchestrator |         "Warnings (expected containers not running): 0",
2026-01-05 02:48:15.810627 | orchestrator |         "",
2026-01-05 02:48:15.810633 | orchestrator |         "✅ All running containers match expected
versions!" 2026-01-05 02:48:15.810639 | orchestrator | ] 2026-01-05 02:48:15.810646 | orchestrator | } 2026-01-05 02:48:15.810653 | orchestrator | 2026-01-05 02:48:15.810660 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-05 02:48:15.882826 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:48:15.882909 | orchestrator | 2026-01-05 02:48:15.882935 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:48:15.882941 | orchestrator | testbed-manager : ok=51 changed=8 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-01-05 02:48:15.882946 | orchestrator | 2026-01-05 02:48:28.466112 | orchestrator | 2026-01-05 02:48:28 | INFO  | Task 9297d76e-1070-46f4-a850-3e0f9633f9fe (sync inventory) is running in background. Output coming soon. 2026-01-05 02:48:58.187346 | orchestrator | 2026-01-05 02:48:29 | INFO  | Starting group_vars file reorganization 2026-01-05 02:48:58.187493 | orchestrator | 2026-01-05 02:48:29 | INFO  | Moved 0 file(s) to their respective directories 2026-01-05 02:48:58.187512 | orchestrator | 2026-01-05 02:48:29 | INFO  | Group_vars file reorganization completed 2026-01-05 02:48:58.187523 | orchestrator | 2026-01-05 02:48:33 | INFO  | Starting variable preparation from inventory 2026-01-05 02:48:58.187533 | orchestrator | 2026-01-05 02:48:36 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-05 02:48:58.187544 | orchestrator | 2026-01-05 02:48:36 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-05 02:48:58.187554 | orchestrator | 2026-01-05 02:48:36 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-05 02:48:58.187563 | orchestrator | 2026-01-05 02:48:36 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-05 02:48:58.187574 | orchestrator | 2026-01-05 02:48:36 | INFO  | Variable preparation completed 2026-01-05 02:48:58.187584 | 
orchestrator | 2026-01-05 02:48:37 | INFO  | Starting inventory overwrite handling 2026-01-05 02:48:58.187594 | orchestrator | 2026-01-05 02:48:37 | INFO  | Handling group overwrites in 99-overwrite 2026-01-05 02:48:58.187603 | orchestrator | 2026-01-05 02:48:37 | INFO  | Removing group frr:children from 60-generic 2026-01-05 02:48:58.187613 | orchestrator | 2026-01-05 02:48:37 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-05 02:48:58.187623 | orchestrator | 2026-01-05 02:48:37 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-05 02:48:58.187632 | orchestrator | 2026-01-05 02:48:37 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-05 02:48:58.187642 | orchestrator | 2026-01-05 02:48:37 | INFO  | Handling group overwrites in 20-roles 2026-01-05 02:48:58.187652 | orchestrator | 2026-01-05 02:48:37 | INFO  | Removing group k3s_node from 50-infrastructure 2026-01-05 02:48:58.187662 | orchestrator | 2026-01-05 02:48:37 | INFO  | Removed 5 group(s) in total 2026-01-05 02:48:58.187672 | orchestrator | 2026-01-05 02:48:37 | INFO  | Inventory overwrite handling completed 2026-01-05 02:48:58.187682 | orchestrator | 2026-01-05 02:48:39 | INFO  | Starting merge of inventory files 2026-01-05 02:48:58.187692 | orchestrator | 2026-01-05 02:48:39 | INFO  | Inventory files merged successfully 2026-01-05 02:48:58.187702 | orchestrator | 2026-01-05 02:48:44 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-05 02:48:58.187732 | orchestrator | 2026-01-05 02:48:56 | INFO  | Successfully wrote ClusterShell configuration 2026-01-05 02:48:58.538101 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-05 02:48:58.538187 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-05 02:48:58.538197 | orchestrator | + local max_attempts=60 2026-01-05 02:48:58.538214 | orchestrator | + local name=kolla-ansible 2026-01-05 02:48:58.538223 | orchestrator | + local attempt_num=1 2026-01-05 02:48:58.538541 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-05 02:48:58.574517 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 02:48:58.574636 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-05 02:48:58.574652 | orchestrator | + local max_attempts=60 2026-01-05 02:48:58.574665 | orchestrator | + local name=osism-ansible 2026-01-05 02:48:58.574707 | orchestrator | + local attempt_num=1 2026-01-05 02:48:58.575304 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-05 02:48:58.612981 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-05 02:48:58.613078 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-01-05 02:48:58.823328 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-05 02:48:58.823410 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:quincy "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-01-05 02:48:58.823418 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-01-05 02:48:58.823444 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-05 02:48:58.823465 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-01-05 02:48:58.823471 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-01-05 02:48:58.823475 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-01-05 02:48:58.823480 | orchestrator | manager-inventory_reconciler-1 
registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-01-05 02:48:58.823485 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 3 minutes ago Up 3 minutes (healthy) 2026-01-05 02:48:58.823489 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-01-05 02:48:58.823493 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-01-05 02:48:58.823498 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp 2026-01-05 02:48:58.823503 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-01-05 02:48:58.823507 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-01-05 02:48:58.823512 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-01-05 02:48:58.823517 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-01-05 02:48:58.830675 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-01-05 02:48:58.830766 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-01-05 02:48:58.830776 | orchestrator | + sync_inventory 2026-01-05 02:48:58.830783 | orchestrator | + sleep 10 2026-01-05 02:49:08.835807 | orchestrator | ++ semver latest 8.0.0 2026-01-05 02:49:08.908706 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 02:49:08.908809 | 
orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 02:49:08.908819 | orchestrator | + osism sync inventory 2026-01-05 02:49:21.172090 | orchestrator | 2026-01-05 02:49:21 | INFO  | Task e3c4ec5a-f9fa-490c-80c8-629601308dd7 (sync inventory) is running in background. Output coming soon. 2026-01-05 02:49:51.986806 | orchestrator | 2026-01-05 02:49:22 | INFO  | Starting group_vars file reorganization 2026-01-05 02:49:51.986875 | orchestrator | 2026-01-05 02:49:22 | INFO  | Moved 0 file(s) to their respective directories 2026-01-05 02:49:51.986882 | orchestrator | 2026-01-05 02:49:22 | INFO  | Group_vars file reorganization completed 2026-01-05 02:49:51.986888 | orchestrator | 2026-01-05 02:49:25 | INFO  | Starting variable preparation from inventory 2026-01-05 02:49:51.986895 | orchestrator | 2026-01-05 02:49:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-05 02:49:51.986902 | orchestrator | 2026-01-05 02:49:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-05 02:49:51.986909 | orchestrator | 2026-01-05 02:49:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-05 02:49:51.986916 | orchestrator | 2026-01-05 02:49:28 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-05 02:49:51.986922 | orchestrator | 2026-01-05 02:49:28 | INFO  | Variable preparation completed 2026-01-05 02:49:51.986926 | orchestrator | 2026-01-05 02:49:30 | INFO  | Starting inventory overwrite handling 2026-01-05 02:49:51.986932 | orchestrator | 2026-01-05 02:49:30 | INFO  | Handling group overwrites in 99-overwrite 2026-01-05 02:49:51.986938 | orchestrator | 2026-01-05 02:49:30 | INFO  | Removing group frr:children from 60-generic 2026-01-05 02:49:51.986945 | orchestrator | 2026-01-05 02:49:30 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-05 02:49:51.986952 | orchestrator | 2026-01-05 02:49:30 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-05 02:49:51.986959 | 
orchestrator | 2026-01-05 02:49:30 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-05 02:49:51.986965 | orchestrator | 2026-01-05 02:49:30 | INFO  | Handling group overwrites in 20-roles 2026-01-05 02:49:51.986972 | orchestrator | 2026-01-05 02:49:30 | INFO  | Removing group k3s_node from 50-infrastructure 2026-01-05 02:49:51.986979 | orchestrator | 2026-01-05 02:49:30 | INFO  | Removed 5 group(s) in total 2026-01-05 02:49:51.986985 | orchestrator | 2026-01-05 02:49:30 | INFO  | Inventory overwrite handling completed 2026-01-05 02:49:51.986993 | orchestrator | 2026-01-05 02:49:32 | INFO  | Starting merge of inventory files 2026-01-05 02:49:51.987000 | orchestrator | 2026-01-05 02:49:32 | INFO  | Inventory files merged successfully 2026-01-05 02:49:51.987008 | orchestrator | 2026-01-05 02:49:38 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-05 02:49:51.987015 | orchestrator | 2026-01-05 02:49:50 | INFO  | Successfully wrote ClusterShell configuration 2026-01-05 02:49:52.427732 | orchestrator | + osism apply facts 2026-01-05 02:50:04.669193 | orchestrator | 2026-01-05 02:50:04 | INFO  | Task c1df030c-a9a2-4cbc-83e8-d405e97fb517 (facts) was prepared for execution. 2026-01-05 02:50:04.669306 | orchestrator | 2026-01-05 02:50:04 | INFO  | It takes a moment until task c1df030c-a9a2-4cbc-83e8-d405e97fb517 (facts) has been started and output is visible here. 
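The shell trace earlier in this section shows a `wait_for_container_healthy` helper polling `docker inspect` for each manager container before continuing. A minimal sketch of such a helper, reconstructed from the trace: the variable names `max_attempts`, `name`, and `attempt_num` appear in the trace itself, while the poll interval and the failure message are assumptions.

```shell
# Sketch of the wait_for_container_healthy helper seen in the trace.
# max_attempts/name/attempt_num come from the trace; the 5-second poll
# interval and the error message are illustrative assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the trace above, `kolla-ansible` and `osism-ansible` report `healthy` on the first probe, so the loop exits immediately.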
2026-01-05 02:50:28.419531 | orchestrator | 2026-01-05 02:50:28.419641 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-05 02:50:28.419653 | orchestrator | 2026-01-05 02:50:28.419661 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-05 02:50:28.419669 | orchestrator | Monday 05 January 2026 02:50:11 +0000 (0:00:02.064) 0:00:02.064 ******** 2026-01-05 02:50:28.419701 | orchestrator | ok: [testbed-manager] 2026-01-05 02:50:28.419710 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:50:28.419717 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:50:28.419724 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:50:28.419730 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:50:28.419737 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:50:28.419744 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:50:28.419751 | orchestrator | 2026-01-05 02:50:28.419757 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-05 02:50:28.419764 | orchestrator | Monday 05 January 2026 02:50:14 +0000 (0:00:03.522) 0:00:05.587 ******** 2026-01-05 02:50:28.419772 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:50:28.419780 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:50:28.419787 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:50:28.419794 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:50:28.419801 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:50:28.419807 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:50:28.419814 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:50:28.419821 | orchestrator | 2026-01-05 02:50:28.419828 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-05 02:50:28.419835 | orchestrator | 2026-01-05 02:50:28.419842 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-05 02:50:28.419849 | orchestrator | Monday 05 January 2026 02:50:17 +0000 (0:00:02.705) 0:00:08.292 ******** 2026-01-05 02:50:28.419855 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:50:28.419862 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:50:28.419869 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:50:28.419876 | orchestrator | ok: [testbed-manager] 2026-01-05 02:50:28.419883 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:50:28.419890 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:50:28.419897 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:50:28.419904 | orchestrator | 2026-01-05 02:50:28.419911 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-05 02:50:28.419918 | orchestrator | 2026-01-05 02:50:28.419925 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-05 02:50:28.419932 | orchestrator | Monday 05 January 2026 02:50:24 +0000 (0:00:07.334) 0:00:15.627 ******** 2026-01-05 02:50:28.419939 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:50:28.419946 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:50:28.419953 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:50:28.419960 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:50:28.419967 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:50:28.419992 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:50:28.419999 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:50:28.420006 | orchestrator | 2026-01-05 02:50:28.420014 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:50:28.420021 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:50:28.420030 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-05 02:50:28.420037 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:50:28.420044 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:50:28.420052 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:50:28.420058 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:50:28.420072 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 02:50:28.420079 | orchestrator | 2026-01-05 02:50:28.420086 | orchestrator | 2026-01-05 02:50:28.420094 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:50:28.420101 | orchestrator | Monday 05 January 2026 02:50:27 +0000 (0:00:02.954) 0:00:18.582 ******** 2026-01-05 02:50:28.420108 | orchestrator | =============================================================================== 2026-01-05 02:50:28.420115 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.33s 2026-01-05 02:50:28.420123 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.52s 2026-01-05 02:50:28.420130 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.95s 2026-01-05 02:50:28.420138 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.71s 2026-01-05 02:50:28.754152 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-01-05 02:50:28.764578 | orchestrator | + set -e 2026-01-05 02:50:28.764670 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 02:50:28.765663 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 02:50:28.765727 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 
02:50:28.765740 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 02:50:28.765749 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 02:50:28.765758 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 02:50:28.765770 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 02:50:28.765779 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 02:50:28.765786 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 02:50:28.765792 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 02:50:28.765798 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 02:50:28.765807 | orchestrator | ++ export ARA=false 2026-01-05 02:50:28.765816 | orchestrator | ++ ARA=false 2026-01-05 02:50:28.765825 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 02:50:28.765834 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 02:50:28.765842 | orchestrator | ++ export TEMPEST=false 2026-01-05 02:50:28.765852 | orchestrator | ++ TEMPEST=false 2026-01-05 02:50:28.765907 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 02:50:28.765915 | orchestrator | ++ IS_ZUUL=true 2026-01-05 02:50:28.765920 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 02:50:28.765925 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 02:50:28.765931 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 02:50:28.765936 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 02:50:28.765941 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 02:50:28.765947 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 02:50:28.765952 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 02:50:28.765957 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 02:50:28.765962 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 02:50:28.765968 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 02:50:28.765973 | orchestrator | ++ export SKIP_CEPH_UPGRADE=true 2026-01-05 02:50:28.765978 | orchestrator | ++ SKIP_CEPH_UPGRADE=true 
2026-01-05 02:50:28.765983 | orchestrator | ++ export SKIP_OPENSTACK_UPGRADE=false 2026-01-05 02:50:28.765988 | orchestrator | ++ SKIP_OPENSTACK_UPGRADE=false 2026-01-05 02:50:28.765994 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-01-05 02:50:28.765999 | orchestrator | + SKIP_CEPH_UPGRADE=true 2026-01-05 02:50:28.766093 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-01-05 02:50:28.776275 | orchestrator | + set -e 2026-01-05 02:50:28.776361 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 02:50:28.777401 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 02:50:28.777447 | orchestrator | ++ INTERACTIVE=false 2026-01-05 02:50:28.777457 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 02:50:28.777465 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 02:50:28.777506 | orchestrator | + source /opt/manager-vars.sh 2026-01-05 02:50:28.777517 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-05 02:50:28.777527 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-05 02:50:28.777533 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-05 02:50:28.777538 | orchestrator | ++ CEPH_VERSION=reef 2026-01-05 02:50:28.777544 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-05 02:50:28.777550 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-05 02:50:28.777556 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-05 02:50:28.777561 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-05 02:50:28.777567 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-05 02:50:28.777596 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-05 02:50:28.777602 | orchestrator | ++ export ARA=false 2026-01-05 02:50:28.777607 | orchestrator | ++ ARA=false 2026-01-05 02:50:28.777613 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-05 02:50:28.777618 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-05 02:50:28.777623 | orchestrator | ++ export TEMPEST=false 2026-01-05 02:50:28.777629 | orchestrator | ++ 
TEMPEST=false 2026-01-05 02:50:28.777634 | orchestrator | ++ export IS_ZUUL=true 2026-01-05 02:50:28.777639 | orchestrator | ++ IS_ZUUL=true 2026-01-05 02:50:28.777644 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 02:50:28.777649 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.95 2026-01-05 02:50:28.777655 | orchestrator | ++ export EXTERNAL_API=false 2026-01-05 02:50:28.777660 | orchestrator | ++ EXTERNAL_API=false 2026-01-05 02:50:28.777665 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-05 02:50:28.777670 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-05 02:50:28.777675 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-05 02:50:28.777680 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-05 02:50:28.777686 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-05 02:50:28.777691 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-05 02:50:28.777696 | orchestrator | ++ export SKIP_CEPH_UPGRADE=true 2026-01-05 02:50:28.777701 | orchestrator | ++ SKIP_CEPH_UPGRADE=true 2026-01-05 02:50:28.777706 | orchestrator | ++ export SKIP_OPENSTACK_UPGRADE=false 2026-01-05 02:50:28.777711 | orchestrator | ++ SKIP_OPENSTACK_UPGRADE=false 2026-01-05 02:50:28.777790 | orchestrator | 2026-01-05 02:50:28.777801 | orchestrator | # PULL IMAGES 2026-01-05 02:50:28.777810 | orchestrator | 2026-01-05 02:50:28.777819 | orchestrator | + echo 2026-01-05 02:50:28.777827 | orchestrator | + echo '# PULL IMAGES' 2026-01-05 02:50:28.777832 | orchestrator | + echo 2026-01-05 02:50:28.778903 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-05 02:50:28.846828 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-05 02:50:28.846906 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-05 02:50:30.910423 | orchestrator | 2026-01-05 02:50:30 | INFO  | Trying to run play pull-images in environment custom 2026-01-05 02:50:41.058602 | orchestrator | 2026-01-05 02:50:41 | INFO  | Task f5d38602-2f3e-441e-a619-3cc09e8313f5 (pull-images) was 
prepared for execution. 2026-01-05 02:50:41.058735 | orchestrator | 2026-01-05 02:50:41 | INFO  | Task f5d38602-2f3e-441e-a619-3cc09e8313f5 is running in background. No more output. Check ARA for logs. 2026-01-05 02:50:41.410474 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-01-05 02:50:41.423173 | orchestrator | + set -e 2026-01-05 02:50:41.423304 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 02:50:41.423333 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 02:50:41.423353 | orchestrator | ++ INTERACTIVE=false 2026-01-05 02:50:41.423407 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 02:50:41.423427 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 02:50:41.423446 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-05 02:50:41.425322 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-05 02:50:41.437103 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-05 02:50:41.437188 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-05 02:50:41.437198 | orchestrator | ++ semver latest 8.0.3 2026-01-05 02:50:41.502547 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-05 02:50:41.502673 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-05 02:50:41.502691 | orchestrator | + osism apply frr 2026-01-05 02:50:53.825133 | orchestrator | 2026-01-05 02:50:53 | INFO  | Task 9e71f9c5-d3bb-4d34-855f-af7949e05445 (frr) was prepared for execution. 2026-01-05 02:50:53.825282 | orchestrator | 2026-01-05 02:50:53 | INFO  | It takes a moment until task 9e71f9c5-d3bb-4d34-855f-af7949e05445 (frr) has been started and output is visible here. 
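The `500-kubernetes.sh` trace above reads `manager_version` from the configuration repository with `awk` and then gates the play on the result of a `semver` comparison, with `latest` always taking the new code path. A sketch of that pattern: the `awk` invocation and file semantics come from the trace, while the wrapper function names are illustrative, and the assumption that the `semver` helper prints `-1`/`0`/`1` is inferred from the `[[ -1 -ge 0 ]]` comparison in the trace.

```shell
# Reconstructed from the trace: extract manager_version from the manager
# environment configuration, then run the play only for versions >= 8.0.3
# or the moving "latest" tag. Wrapper names are illustrative.
read_manager_version() {
    # -F': ' splits "manager_version: latest" into key and value.
    awk -F': ' '/^manager_version:/ { print $2 }' "$1"
}

run_if_supported() {
    local version="$1"
    # Assumed: semver prints -1/0/1; "latest" is checked separately.
    if [ "$version" = "latest" ] || [ "$(semver "$version" 8.0.3)" -ge 0 ]; then
        osism apply frr
    fi
}
```

Note that the traced script evaluates the `semver` comparison before the `latest` check; the order here is flipped only so a non-numeric tag never reaches the arithmetic test.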
2026-01-05 02:51:28.772826 | orchestrator |
2026-01-05 02:51:28.772937 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-01-05 02:51:28.772951 | orchestrator |
2026-01-05 02:51:28.772959 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-01-05 02:51:28.772967 | orchestrator | Monday 05 January 2026 02:51:03 +0000 (0:00:04.807) 0:00:04.807 ********
2026-01-05 02:51:28.772975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-01-05 02:51:28.773010 | orchestrator |
2026-01-05 02:51:28.773017 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-01-05 02:51:28.773025 | orchestrator | Monday 05 January 2026 02:51:05 +0000 (0:00:02.319) 0:00:07.127 ********
2026-01-05 02:51:28.773032 | orchestrator | ok: [testbed-manager]
2026-01-05 02:51:28.773041 | orchestrator |
2026-01-05 02:51:28.773048 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-01-05 02:51:28.773056 | orchestrator | Monday 05 January 2026 02:51:08 +0000 (0:00:03.102) 0:00:09.700 ********
2026-01-05 02:51:28.773065 | orchestrator | ok: [testbed-manager]
2026-01-05 02:51:28.773073 | orchestrator |
2026-01-05 02:51:28.773080 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-01-05 02:51:28.773087 | orchestrator | Monday 05 January 2026 02:51:11 +0000 (0:00:01.957) 0:00:12.802 ********
2026-01-05 02:51:28.773094 | orchestrator | ok: [testbed-manager]
2026-01-05 02:51:28.773101 | orchestrator |
2026-01-05 02:51:28.773108 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-01-05 02:51:28.773115 | orchestrator | Monday 05 January 2026 02:51:13 +0000 (0:00:01.957) 0:00:14.760 ********
2026-01-05 02:51:28.773122 | orchestrator | ok: [testbed-manager]
2026-01-05 02:51:28.773129 | orchestrator |
2026-01-05 02:51:28.773136 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-01-05 02:51:28.773144 | orchestrator | Monday 05 January 2026 02:51:15 +0000 (0:00:01.964) 0:00:16.725 ********
2026-01-05 02:51:28.773151 | orchestrator | ok: [testbed-manager]
2026-01-05 02:51:28.773158 | orchestrator |
2026-01-05 02:51:28.773166 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-01-05 02:51:28.773197 | orchestrator | Monday 05 January 2026 02:51:17 +0000 (0:00:02.392) 0:00:19.117 ********
2026-01-05 02:51:28.773205 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:51:28.773213 | orchestrator |
2026-01-05 02:51:28.773220 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-01-05 02:51:28.773227 | orchestrator | Monday 05 January 2026 02:51:18 +0000 (0:00:01.138) 0:00:20.256 ********
2026-01-05 02:51:28.773235 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:51:28.773242 | orchestrator |
2026-01-05 02:51:28.773249 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-01-05 02:51:28.773256 | orchestrator | Monday 05 January 2026 02:51:20 +0000 (0:00:01.125) 0:00:21.382 ********
2026-01-05 02:51:28.773263 | orchestrator | ok: [testbed-manager]
2026-01-05 02:51:28.773270 | orchestrator |
2026-01-05 02:51:28.773277 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-01-05 02:51:28.773285 | orchestrator | Monday 05 January 2026 02:51:22 +0000 (0:00:02.037) 0:00:23.419 ********
2026-01-05 02:51:28.773292 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-01-05 02:51:28.773299 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-01-05 02:51:28.773309 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-01-05 02:51:28.773316 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-01-05 02:51:28.773324 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-01-05 02:51:28.773331 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-01-05 02:51:28.773339 | orchestrator |
2026-01-05 02:51:28.773347 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-01-05 02:51:28.773354 | orchestrator | Monday 05 January 2026 02:51:25 +0000 (0:00:03.688) 0:00:27.108 ********
2026-01-05 02:51:28.773362 | orchestrator | ok: [testbed-manager]
2026-01-05 02:51:28.773368 | orchestrator |
2026-01-05 02:51:28.773375 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:51:28.773382 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-05 02:51:28.773401 | orchestrator |
2026-01-05 02:51:28.773408 | orchestrator |
2026-01-05 02:51:28.773416 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:51:28.773423 | orchestrator | Monday 05 January 2026 02:51:28 +0000 (0:00:02.619) 0:00:29.727 ********
2026-01-05 02:51:28.773430 | orchestrator | ===============================================================================
2026-01-05 02:51:28.773437 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.69s
2026-01-05 02:51:28.773445 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.10s
2026-01-05 02:51:28.773452 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.62s
2026-01-05 02:51:28.773459 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.57s
2026-01-05 02:51:28.773466 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.39s
2026-01-05 02:51:28.773474 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.32s
2026-01-05 02:51:28.773482 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.04s
2026-01-05 02:51:28.773489 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.96s
2026-01-05 02:51:28.773517 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.96s
2026-01-05 02:51:28.773550 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.14s
2026-01-05 02:51:28.773558 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.13s
2026-01-05 02:51:29.146772 | orchestrator | + osism apply kubernetes
2026-01-05 02:51:31.487321 | orchestrator | 2026-01-05 02:51:31 | INFO  | Task 88cae4a1-c973-4d9d-ba47-7332d1b11ef7 (kubernetes) was prepared for execution.
2026-01-05 02:51:31.487394 | orchestrator | 2026-01-05 02:51:31 | INFO  | It takes a moment until task 88cae4a1-c973-4d9d-ba47-7332d1b11ef7 (kubernetes) has been started and output is visible here.
2026-01-05 02:51:56.391201 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-01-05 02:51:56.391366 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-01-05 02:51:56.391386 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-01-05 02:51:56.391393 | orchestrator | (): 'NoneType' object is not subscriptable
2026-01-05 02:51:56.391407 | orchestrator |
2026-01-05 02:51:56.391414 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-05 02:51:56.391420 | orchestrator |
2026-01-05 02:51:56.391437 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-05 02:51:56.391447 | orchestrator | Monday 05 January 2026 02:51:37 +0000 (0:00:01.747) 0:00:01.748 ********
2026-01-05 02:51:56.391454 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:51:56.391461 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:51:56.391467 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:51:56.391474 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:51:56.391480 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:51:56.391486 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:51:56.391492 | orchestrator |
2026-01-05 02:51:56.391499 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-05 02:51:56.391505 | orchestrator | Monday 05 January 2026 02:51:40 +0000 (0:00:02.317) 0:00:04.065 ********
2026-01-05 02:51:56.391511 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.391518 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.391524 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.391530 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.391538 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:51:56.391593 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:51:56.391602 | orchestrator |
2026-01-05 02:51:56.391608 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-05 02:51:56.391615 | orchestrator | Monday 05 January 2026 02:51:40 +0000 (0:00:00.716) 0:00:04.782 ********
2026-01-05 02:51:56.391621 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.391627 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.391633 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.391639 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.391645 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:51:56.391651 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:51:56.391658 | orchestrator |
2026-01-05 02:51:56.391664 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-05 02:51:56.391670 | orchestrator | Monday 05 January 2026 02:51:41 +0000 (0:00:00.832) 0:00:05.614 ********
2026-01-05 02:51:56.391676 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:51:56.391682 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:51:56.391688 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:51:56.391695 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:51:56.391701 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:51:56.391707 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:51:56.391713 | orchestrator |
2026-01-05 02:51:56.391721 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-05 02:51:56.391728 | orchestrator | Monday 05 January 2026 02:51:44 +0000 (0:00:02.246) 0:00:07.860 ********
2026-01-05 02:51:56.391736 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:51:56.391743 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:51:56.391750 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:51:56.391757 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:51:56.391765 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:51:56.391773 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:51:56.391780 | orchestrator |
2026-01-05 02:51:56.391787 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-01-05 02:51:56.391795 | orchestrator | Monday 05 January 2026 02:51:45 +0000 (0:00:01.289) 0:00:09.150 ********
2026-01-05 02:51:56.391802 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:51:56.391810 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:51:56.391817 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:51:56.391824 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:51:56.391832 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:51:56.391839 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:51:56.391846 | orchestrator |
2026-01-05 02:51:56.391853 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-05 02:51:56.391861 | orchestrator | Monday 05 January 2026 02:51:46 +0000 (0:00:01.066) 0:00:10.216 ********
2026-01-05 02:51:56.391868 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.391875 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.391882 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.391889 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.391896 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:51:56.391904 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:51:56.391912 | orchestrator |
2026-01-05 02:51:56.391919 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-05 02:51:56.391927 | orchestrator | Monday 05 January 2026 02:51:47 +0000 (0:00:00.824) 0:00:11.041 ********
2026-01-05 02:51:56.391934 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.391942 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.391949 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.391957 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.391965 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:51:56.391972 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:51:56.391979 | orchestrator |
2026-01-05 02:51:56.391987 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-05 02:51:56.391993 | orchestrator | Monday 05 January 2026 02:51:47 +0000 (0:00:00.640) 0:00:11.681 ********
2026-01-05 02:51:56.392005 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 02:51:56.392012 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 02:51:56.392018 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.392024 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 02:51:56.392034 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 02:51:56.392064 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.392076 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 02:51:56.392087 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 02:51:56.392098 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.392109 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 02:51:56.392118 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 02:51:56.392124 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.392131 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 02:51:56.392137 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 02:51:56.392143 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:51:56.392149 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-05 02:51:56.392156 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-05 02:51:56.392162 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:51:56.392168 | orchestrator |
2026-01-05 02:51:56.392174 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-05 02:51:56.392180 | orchestrator | Monday 05 January 2026 02:51:48 +0000 (0:00:00.903) 0:00:12.585 ********
2026-01-05 02:51:56.392186 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.392192 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.392199 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.392205 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.392211 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:51:56.392217 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:51:56.392223 | orchestrator |
2026-01-05 02:51:56.392230 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-05 02:51:56.392237 | orchestrator | Monday 05 January 2026 02:51:50 +0000 (0:00:01.308) 0:00:13.893 ********
2026-01-05 02:51:56.392248 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:51:56.392254 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:51:56.392260 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:51:56.392267 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:51:56.392273 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:51:56.392279 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:51:56.392285 | orchestrator |
2026-01-05 02:51:56.392291 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-05 02:51:56.392297 | orchestrator | Monday 05 January 2026 02:51:50 +0000 (0:00:00.871) 0:00:14.764 ********
2026-01-05 02:51:56.392303 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:51:56.392310 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:51:56.392316 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:51:56.392322 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:51:56.392328 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:51:56.392334 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:51:56.392340 | orchestrator |
2026-01-05 02:51:56.392346 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-01-05 02:51:56.392353 | orchestrator | Monday 05 January 2026 02:51:52 +0000 (0:00:01.489) 0:00:16.254 ********
2026-01-05 02:51:56.392359 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.392365 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.392377 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.392383 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.392389 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:51:56.392396 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:51:56.392402 | orchestrator |
2026-01-05 02:51:56.392408 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-01-05 02:51:56.392414 | orchestrator | Monday 05 January 2026 02:51:53 +0000 (0:00:00.949) 0:00:17.203 ********
2026-01-05 02:51:56.392421 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.392427 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.392433 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.392439 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.392445 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:51:56.392451 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:51:56.392457 | orchestrator |
2026-01-05 02:51:56.392468 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-01-05 02:51:56.392477 | orchestrator | Monday 05 January 2026 02:51:54 +0000 (0:00:01.349) 0:00:18.553 ********
2026-01-05 02:51:56.392483 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.392489 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.392495 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.392501 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.392507 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:51:56.392514 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:51:56.392520 | orchestrator |
2026-01-05 02:51:56.392526 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-05 02:51:56.392532 | orchestrator | Monday 05 January 2026 02:51:55 +0000 (0:00:00.807) 0:00:19.361 ********
2026-01-05 02:51:56.392538 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-05 02:51:56.392561 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-05 02:51:56.392568 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:51:56.392574 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-05 02:51:56.392581 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-05 02:51:56.392587 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:51:56.392593 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-05 02:51:56.392599 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-05 02:51:56.392608 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:51:56.392617 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-05 02:51:56.392626 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-05 02:51:56.392636 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:51:56.392645 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-05 02:51:56.392664 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-05 02:53:09.335010 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335070 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-05 02:53:09.335077 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-05 02:53:09.335083 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335089 | orchestrator |
2026-01-05 02:53:09.335096 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-05 02:53:09.335103 | orchestrator | Monday 05 January 2026 02:51:56 +0000 (0:00:00.948) 0:00:20.310 ********
2026-01-05 02:53:09.335109 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:53:09.335115 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:53:09.335121 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:53:09.335127 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:53:09.335133 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335139 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335145 | orchestrator |
2026-01-05 02:53:09.335161 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-05 02:53:09.335183 | orchestrator | Monday 05 January 2026 02:51:57 +0000 (0:00:00.622) 0:00:20.932 ********
2026-01-05 02:53:09.335190 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:53:09.335196 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:53:09.335203 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:53:09.335209 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:53:09.335216 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335223 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335227 | orchestrator |
2026-01-05 02:53:09.335231 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-05 02:53:09.335235 | orchestrator |
2026-01-05 02:53:09.335239 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-05 02:53:09.335243 | orchestrator | Monday 05 January 2026 02:51:59 +0000 (0:00:01.954) 0:00:22.887 ********
2026-01-05 02:53:09.335247 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335251 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335255 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335258 | orchestrator |
2026-01-05 02:53:09.335262 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-05 02:53:09.335266 | orchestrator | Monday 05 January 2026 02:51:59 +0000 (0:00:00.852) 0:00:23.740 ********
2026-01-05 02:53:09.335270 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335274 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335283 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335287 | orchestrator |
2026-01-05 02:53:09.335291 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-05 02:53:09.335295 | orchestrator | Monday 05 January 2026 02:52:01 +0000 (0:00:01.160) 0:00:24.900 ********
2026-01-05 02:53:09.335299 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:09.335302 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:53:09.335306 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:53:09.335310 | orchestrator |
2026-01-05 02:53:09.335314 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-05 02:53:09.335317 | orchestrator | Monday 05 January 2026 02:52:02 +0000 (0:00:01.048) 0:00:26.103 ********
2026-01-05 02:53:09.335321 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335325 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335332 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335338 | orchestrator |
2026-01-05 02:53:09.335344 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-05 02:53:09.335351 | orchestrator | Monday 05 January 2026 02:52:03 +0000 (0:00:01.048) 0:00:27.151 ********
2026-01-05 02:53:09.335358 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:53:09.335364 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335371 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335378 | orchestrator |
2026-01-05 02:53:09.335384 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-05 02:53:09.335391 | orchestrator | Monday 05 January 2026 02:52:03 +0000 (0:00:00.346) 0:00:27.498 ********
2026-01-05 02:53:09.335396 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335400 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335404 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335408 | orchestrator |
2026-01-05 02:53:09.335412 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-05 02:53:09.335416 | orchestrator | Monday 05 January 2026 02:52:04 +0000 (0:00:00.751) 0:00:28.250 ********
2026-01-05 02:53:09.335419 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335423 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335427 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335431 | orchestrator |
2026-01-05 02:53:09.335436 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-05 02:53:09.335442 | orchestrator | Monday 05 January 2026 02:52:05 +0000 (0:00:01.357) 0:00:29.607 ********
2026-01-05 02:53:09.335449 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 02:53:09.335464 | orchestrator |
2026-01-05 02:53:09.335470 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-05 02:53:09.335476 | orchestrator | Monday 05 January 2026 02:52:06 +0000 (0:00:01.024) 0:00:30.632 ********
2026-01-05 02:53:09.335482 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335488 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335495 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335500 | orchestrator |
2026-01-05 02:53:09.335506 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-05 02:53:09.335512 | orchestrator | Monday 05 January 2026 02:52:09 +0000 (0:00:02.444) 0:00:33.077 ********
2026-01-05 02:53:09.335518 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335524 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335530 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335534 | orchestrator |
2026-01-05 02:53:09.335537 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-05 02:53:09.335541 | orchestrator | Monday 05 January 2026 02:52:10 +0000 (0:00:00.807) 0:00:33.885 ********
2026-01-05 02:53:09.335545 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335549 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335553 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:09.335557 | orchestrator |
2026-01-05 02:53:09.335571 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-05 02:53:09.335576 | orchestrator | Monday 05 January 2026 02:52:11 +0000 (0:00:00.981) 0:00:34.866 ********
2026-01-05 02:53:09.335580 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335583 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335587 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:09.335591 | orchestrator |
2026-01-05 02:53:09.335595 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-05 02:53:09.335601 | orchestrator | Monday 05 January 2026 02:52:12 +0000 (0:00:01.796) 0:00:36.662 ********
2026-01-05 02:53:09.335657 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:53:09.335664 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335670 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335686 | orchestrator |
2026-01-05 02:53:09.335694 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-05 02:53:09.335701 | orchestrator | Monday 05 January 2026 02:52:13 +0000 (0:00:00.438) 0:00:37.101 ********
2026-01-05 02:53:09.335708 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:53:09.335715 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335720 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335724 | orchestrator |
2026-01-05 02:53:09.335729 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-05 02:53:09.335733 | orchestrator | Monday 05 January 2026 02:52:13 +0000 (0:00:00.526) 0:00:37.628 ********
2026-01-05 02:53:09.335738 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:09.335743 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:53:09.335747 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:53:09.335751 | orchestrator |
2026-01-05 02:53:09.335756 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-05 02:53:09.335760 | orchestrator | Monday 05 January 2026 02:52:15 +0000 (0:00:01.732) 0:00:39.361 ********
2026-01-05 02:53:09.335765 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335769 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335774 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335779 | orchestrator |
2026-01-05 02:53:09.335783 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-05 02:53:09.335787 | orchestrator | Monday 05 January 2026 02:52:16 +0000 (0:00:00.916) 0:00:40.278 ********
2026-01-05 02:53:09.335792 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335796 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335801 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335810 | orchestrator |
2026-01-05 02:53:09.335816 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-05 02:53:09.335823 | orchestrator | Monday 05 January 2026 02:52:16 +0000 (0:00:00.303) 0:00:40.581 ********
2026-01-05 02:53:09.335837 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-05 02:53:09.335846 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-05 02:53:09.335853 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-05 02:53:09.335860 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-05 02:53:09.335867 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-05 02:53:09.335873 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-05 02:53:09.335879 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335883 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335888 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335893 | orchestrator |
2026-01-05 02:53:09.335898 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-05 02:53:09.335902 | orchestrator | Monday 05 January 2026 02:52:39 +0000 (0:00:22.230) 0:01:02.812 ********
2026-01-05 02:53:09.335907 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:53:09.335911 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:09.335916 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:09.335921 | orchestrator |
2026-01-05 02:53:09.335933 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-05 02:53:09.335938 | orchestrator | Monday 05 January 2026 02:52:39 +0000 (0:00:00.603) 0:01:03.415 ********
2026-01-05 02:53:09.335942 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:09.335947 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:53:09.335952 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:53:09.335956 | orchestrator |
2026-01-05 02:53:09.335961 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-05 02:53:09.335965 | orchestrator | Monday 05 January 2026 02:52:41 +0000 (0:00:01.547) 0:01:04.963 ********
2026-01-05 02:53:09.335970 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.335974 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.335978 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.335982 | orchestrator |
2026-01-05 02:53:09.335986 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-05 02:53:09.335990 | orchestrator | Monday 05 January 2026 02:52:42 +0000 (0:00:01.553) 0:01:06.517 ********
2026-01-05 02:53:09.335994 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:53:09.336001 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:09.336007 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:53:09.336013 | orchestrator |
2026-01-05 02:53:09.336019 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-05 02:53:09.336026 | orchestrator | Monday 05 January 2026 02:53:08 +0000 (0:00:25.824) 0:01:32.342 ********
2026-01-05 02:53:09.336032 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:09.336039 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:09.336046 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:09.336053 | orchestrator |
2026-01-05 02:53:09.336067 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-05 02:53:31.257438 | orchestrator | Monday 05 January 2026 02:53:09 +0000 (0:00:00.772) 0:01:33.114 ********
2026-01-05 02:53:31.257610 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:31.257705 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:31.257724 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:31.257774 | orchestrator |
2026-01-05 02:53:31.257794 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-05 02:53:31.257811 | orchestrator | Monday 05 January 2026 02:53:09 +0000 (0:00:00.673) 0:01:33.788 ********
2026-01-05 02:53:31.257830 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:31.257871 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:53:31.257890 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:53:31.257906 | orchestrator |
2026-01-05 02:53:31.257923 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-05 02:53:31.257941 | orchestrator | Monday 05 January 2026 02:53:10 +0000 (0:00:00.730) 0:01:34.734 ********
2026-01-05 02:53:31.257980 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:31.257999 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:31.258091 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:31.258123 | orchestrator |
2026-01-05 02:53:31.258141 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-05 02:53:31.258160 | orchestrator | Monday 05 January 2026 02:53:11 +0000 (0:00:00.730) 0:01:35.465 ********
2026-01-05 02:53:31.258178 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:31.258196 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:31.258213 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:31.258232 | orchestrator |
2026-01-05 02:53:31.258250 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-05 02:53:31.258268 | orchestrator | Monday 05 January 2026 02:53:11 +0000 (0:00:00.324) 0:01:35.790 ********
2026-01-05 02:53:31.258288 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:31.258306 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:53:31.258324 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:53:31.258340 | orchestrator |
2026-01-05 02:53:31.258358 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-05 02:53:31.258378 | orchestrator | Monday 05 January 2026 02:53:12 +0000 (0:00:00.749) 0:01:36.540 ********
2026-01-05 02:53:31.258396 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:53:31.258414 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:31.258432 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:31.258451 | orchestrator |
2026-01-05 02:53:31.258470 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-05 02:53:31.258487 | orchestrator | Monday 05 January 2026 02:53:13 +0000 (0:00:01.024) 0:01:37.564 ********
2026-01-05 02:53:31.258507 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:31.258525 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:53:31.258544 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:53:31.258563 | orchestrator |
2026-01-05 02:53:31.258584 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-05 02:53:31.258603 | orchestrator | Monday 05 January 2026 02:53:14 +0000 (0:00:00.965) 0:01:38.530 ********
2026-01-05 02:53:31.258684 | orchestrator | changed: [testbed-node-1]
2026-01-05 02:53:31.258709 | orchestrator | changed: [testbed-node-2]
2026-01-05 02:53:31.258726 | orchestrator | changed: [testbed-node-0]
2026-01-05 02:53:31.258743 | orchestrator |
2026-01-05 02:53:31.258761 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-05 02:53:31.258780 | orchestrator | Monday 05 January 2026 02:53:15 +0000 (0:00:00.927) 0:01:39.458 ********
2026-01-05 02:53:31.258798 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:53:31.258817 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:31.258836 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:31.258855 | orchestrator |
2026-01-05 02:53:31.258873 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-05 02:53:31.258892 | orchestrator | Monday 05 January 2026 02:53:15 +0000 (0:00:00.323) 0:01:39.781 ********
2026-01-05 02:53:31.258903 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:53:31.258914 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:53:31.258924 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:53:31.258935 | orchestrator |
2026-01-05 02:53:31.258946 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-05 02:53:31.258973 | orchestrator | Monday 05 January 2026 02:53:16 +0000 (0:00:00.567) 0:01:40.349 ********
2026-01-05 02:53:31.258985 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:53:31.258995 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:53:31.259006 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:53:31.259017 | orchestrator | 2026-01-05 02:53:31.259028 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-05 02:53:31.259038 | orchestrator | Monday 05 January 2026 02:53:17 +0000 (0:00:00.791) 0:01:41.141 ******** 2026-01-05 02:53:31.259049 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:53:31.259060 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:53:31.259070 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:53:31.259081 | orchestrator | 2026-01-05 02:53:31.259093 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-05 02:53:31.259107 | orchestrator | Monday 05 January 2026 02:53:18 +0000 (0:00:00.719) 0:01:41.861 ******** 2026-01-05 02:53:31.259118 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 02:53:31.259129 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 02:53:31.259140 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-05 02:53:31.259151 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 02:53:31.259161 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 02:53:31.259172 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-05 02:53:31.259183 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 02:53:31.259221 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 02:53:31.259233 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-05 02:53:31.259243 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 02:53:31.259254 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-05 02:53:31.259265 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 02:53:31.259276 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 02:53:31.259288 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 02:53:31.259298 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-05 02:53:31.259309 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 02:53:31.259320 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 02:53:31.259331 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-05 02:53:31.259342 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-05 02:53:31.259352 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-05 02:53:31.259363 | orchestrator | 2026-01-05 02:53:31.259374 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-05 02:53:31.259385 | orchestrator | 2026-01-05 02:53:31.259396 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-05 02:53:31.259407 | orchestrator | Monday 05 January 2026 02:53:22 +0000 (0:00:04.690) 0:01:46.551 ******** 
2026-01-05 02:53:31.259417 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:53:31.259436 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:53:31.259447 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:53:31.259458 | orchestrator | 2026-01-05 02:53:31.259481 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-05 02:53:31.259493 | orchestrator | Monday 05 January 2026 02:53:23 +0000 (0:00:00.408) 0:01:46.960 ******** 2026-01-05 02:53:31.259504 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:53:31.259515 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:53:31.259526 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:53:31.259536 | orchestrator | 2026-01-05 02:53:31.259547 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-05 02:53:31.259571 | orchestrator | Monday 05 January 2026 02:53:23 +0000 (0:00:00.788) 0:01:47.749 ******** 2026-01-05 02:53:31.259582 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:53:31.259593 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:53:31.259603 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:53:31.259614 | orchestrator | 2026-01-05 02:53:31.259671 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-05 02:53:31.259692 | orchestrator | Monday 05 January 2026 02:53:24 +0000 (0:00:00.744) 0:01:48.493 ******** 2026-01-05 02:53:31.259711 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 02:53:31.259730 | orchestrator | 2026-01-05 02:53:31.259749 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-05 02:53:31.259760 | orchestrator | Monday 05 January 2026 02:53:25 +0000 (0:00:01.056) 0:01:49.550 ******** 2026-01-05 02:53:31.259771 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:53:31.259782 | orchestrator | 
skipping: [testbed-node-4] 2026-01-05 02:53:31.259793 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:53:31.259803 | orchestrator | 2026-01-05 02:53:31.259814 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-05 02:53:31.259825 | orchestrator | Monday 05 January 2026 02:53:26 +0000 (0:00:00.364) 0:01:49.914 ******** 2026-01-05 02:53:31.259836 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:53:31.259846 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:53:31.259857 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:53:31.259868 | orchestrator | 2026-01-05 02:53:31.259879 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-05 02:53:31.259889 | orchestrator | Monday 05 January 2026 02:53:26 +0000 (0:00:00.788) 0:01:50.703 ******** 2026-01-05 02:53:31.259900 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:53:31.259911 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:53:31.259921 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:53:31.259932 | orchestrator | 2026-01-05 02:53:31.259943 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-05 02:53:31.259953 | orchestrator | Monday 05 January 2026 02:53:27 +0000 (0:00:00.428) 0:01:51.132 ******** 2026-01-05 02:53:31.259964 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:53:31.259975 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:53:31.259985 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:53:31.259996 | orchestrator | 2026-01-05 02:53:31.260007 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-05 02:53:31.260017 | orchestrator | Monday 05 January 2026 02:53:28 +0000 (0:00:00.789) 0:01:51.921 ******** 2026-01-05 02:53:31.260028 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:53:31.260039 | orchestrator | ok: [testbed-node-4] 
2026-01-05 02:53:31.260049 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:53:31.260060 | orchestrator | 2026-01-05 02:53:31.260070 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-05 02:53:31.260084 | orchestrator | Monday 05 January 2026 02:53:29 +0000 (0:00:01.670) 0:01:53.592 ******** 2026-01-05 02:53:31.260102 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:53:31.260159 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:53:31.260179 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:53:31.260196 | orchestrator | 2026-01-05 02:53:31.260227 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-05 02:53:31.260257 | orchestrator | Monday 05 January 2026 02:53:31 +0000 (0:00:01.435) 0:01:55.028 ******** 2026-01-05 02:54:13.678435 | orchestrator | changed: [testbed-node-4] 2026-01-05 02:54:13.678522 | orchestrator | changed: [testbed-node-3] 2026-01-05 02:54:13.678529 | orchestrator | changed: [testbed-node-5] 2026-01-05 02:54:13.678534 | orchestrator | 2026-01-05 02:54:13.678539 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-05 02:54:13.678546 | orchestrator | 2026-01-05 02:54:13.678553 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-05 02:54:13.678560 | orchestrator | Monday 05 January 2026 02:53:38 +0000 (0:00:07.690) 0:02:02.719 ******** 2026-01-05 02:54:13.678567 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.678573 | orchestrator | 2026-01-05 02:54:13.678577 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-05 02:54:13.678582 | orchestrator | Monday 05 January 2026 02:53:40 +0000 (0:00:01.291) 0:02:04.010 ******** 2026-01-05 02:54:13.678599 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.678603 | orchestrator | 2026-01-05 02:54:13.678607 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-05 02:54:13.678611 | orchestrator | Monday 05 January 2026 02:53:40 +0000 (0:00:00.499) 0:02:04.510 ******** 2026-01-05 02:54:13.678615 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-05 02:54:13.678619 | orchestrator | 2026-01-05 02:54:13.678623 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-05 02:54:13.678627 | orchestrator | Monday 05 January 2026 02:53:41 +0000 (0:00:00.660) 0:02:05.171 ******** 2026-01-05 02:54:13.678631 | orchestrator | changed: [testbed-manager] 2026-01-05 02:54:13.678634 | orchestrator | 2026-01-05 02:54:13.678640 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-05 02:54:13.678646 | orchestrator | Monday 05 January 2026 02:53:42 +0000 (0:00:01.030) 0:02:06.202 ******** 2026-01-05 02:54:13.678652 | orchestrator | changed: [testbed-manager] 2026-01-05 02:54:13.678702 | orchestrator | 2026-01-05 02:54:13.678709 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-05 02:54:13.678716 | orchestrator | Monday 05 January 2026 02:53:43 +0000 (0:00:00.696) 0:02:06.899 ******** 2026-01-05 02:54:13.678722 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 02:54:13.678729 | orchestrator | 2026-01-05 02:54:13.678736 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-05 02:54:13.678743 | orchestrator | Monday 05 January 2026 02:53:45 +0000 (0:00:02.435) 0:02:09.334 ******** 2026-01-05 02:54:13.678753 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-05 02:54:13.678761 | orchestrator | 2026-01-05 02:54:13.678766 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-05 02:54:13.678772 | orchestrator | Monday 05 January 
2026 02:53:46 +0000 (0:00:00.913) 0:02:10.248 ******** 2026-01-05 02:54:13.678778 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.678784 | orchestrator | 2026-01-05 02:54:13.678791 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-05 02:54:13.678798 | orchestrator | Monday 05 January 2026 02:53:47 +0000 (0:00:00.718) 0:02:10.967 ******** 2026-01-05 02:54:13.678804 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.678811 | orchestrator | 2026-01-05 02:54:13.678817 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-05 02:54:13.678821 | orchestrator | 2026-01-05 02:54:13.678825 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-05 02:54:13.678829 | orchestrator | Monday 05 January 2026 02:53:48 +0000 (0:00:00.909) 0:02:11.876 ******** 2026-01-05 02:54:13.678833 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.678837 | orchestrator | 2026-01-05 02:54:13.678840 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-05 02:54:13.678844 | orchestrator | Monday 05 January 2026 02:53:48 +0000 (0:00:00.155) 0:02:12.032 ******** 2026-01-05 02:54:13.678870 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 02:54:13.678878 | orchestrator | 2026-01-05 02:54:13.678883 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-01-05 02:54:13.678889 | orchestrator | Monday 05 January 2026 02:53:48 +0000 (0:00:00.625) 0:02:12.657 ******** 2026-01-05 02:54:13.678895 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.678901 | orchestrator | 2026-01-05 02:54:13.678906 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-01-05 02:54:13.678912 | orchestrator | Monday 05 January 2026 
02:53:49 +0000 (0:00:00.862) 0:02:13.520 ******** 2026-01-05 02:54:13.678917 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.678923 | orchestrator | 2026-01-05 02:54:13.678929 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-01-05 02:54:13.678983 | orchestrator | Monday 05 January 2026 02:53:51 +0000 (0:00:01.727) 0:02:15.248 ******** 2026-01-05 02:54:13.678991 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.678997 | orchestrator | 2026-01-05 02:54:13.679003 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-01-05 02:54:13.679010 | orchestrator | Monday 05 January 2026 02:53:51 +0000 (0:00:00.376) 0:02:15.625 ******** 2026-01-05 02:54:13.679017 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.679023 | orchestrator | 2026-01-05 02:54:13.679031 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-01-05 02:54:13.679038 | orchestrator | Monday 05 January 2026 02:53:52 +0000 (0:00:00.587) 0:02:16.212 ******** 2026-01-05 02:54:13.679044 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.679050 | orchestrator | 2026-01-05 02:54:13.679056 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-01-05 02:54:13.679062 | orchestrator | Monday 05 January 2026 02:53:53 +0000 (0:00:00.693) 0:02:16.906 ******** 2026-01-05 02:54:13.679068 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.679074 | orchestrator | 2026-01-05 02:54:13.679081 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-01-05 02:54:13.679087 | orchestrator | Monday 05 January 2026 02:53:54 +0000 (0:00:01.285) 0:02:18.192 ******** 2026-01-05 02:54:13.679094 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:13.679100 | orchestrator | 2026-01-05 02:54:13.679107 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-01-05 02:54:13.679113 | orchestrator | 2026-01-05 02:54:13.679138 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-01-05 02:54:13.679145 | orchestrator | Monday 05 January 2026 02:53:55 +0000 (0:00:01.083) 0:02:19.275 ******** 2026-01-05 02:54:13.679151 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:54:13.679158 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:54:13.679164 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:54:13.679169 | orchestrator | 2026-01-05 02:54:13.679175 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-01-05 02:54:13.679181 | orchestrator | Monday 05 January 2026 02:53:55 +0000 (0:00:00.382) 0:02:19.658 ******** 2026-01-05 02:54:13.679188 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:54:13.679194 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:54:13.679200 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:54:13.679207 | orchestrator | 2026-01-05 02:54:13.679213 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-01-05 02:54:13.679219 | orchestrator | Monday 05 January 2026 02:53:56 +0000 (0:00:00.583) 0:02:20.241 ******** 2026-01-05 02:54:13.679226 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:54:13.679232 | orchestrator | 2026-01-05 02:54:13.679239 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-01-05 02:54:13.679245 | orchestrator | Monday 05 January 2026 02:53:57 +0000 (0:00:01.034) 0:02:21.276 ******** 2026-01-05 02:54:13.679252 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 02:54:13.679269 | orchestrator | 2026-01-05 02:54:13.679275 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-01-05 02:54:13.679281 | orchestrator | Monday 05 January 2026 02:53:58 +0000 (0:00:00.859) 0:02:22.136 ******** 2026-01-05 02:54:13.679287 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 02:54:13.679293 | orchestrator | 2026-01-05 02:54:13.679300 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-05 02:54:13.679306 | orchestrator | Monday 05 January 2026 02:53:59 +0000 (0:00:00.916) 0:02:23.053 ******** 2026-01-05 02:54:13.679348 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:54:13.679355 | orchestrator | 2026-01-05 02:54:13.679362 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-05 02:54:13.679368 | orchestrator | Monday 05 January 2026 02:53:59 +0000 (0:00:00.181) 0:02:23.234 ******** 2026-01-05 02:54:13.679374 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 02:54:13.679381 | orchestrator | 2026-01-05 02:54:13.679387 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-05 02:54:13.679393 | orchestrator | Monday 05 January 2026 02:54:01 +0000 (0:00:01.584) 0:02:24.818 ******** 2026-01-05 02:54:13.679400 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 02:54:13.679406 | orchestrator | 2026-01-05 02:54:13.679412 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-05 02:54:13.679418 | orchestrator | Monday 05 January 2026 02:54:02 +0000 (0:00:01.418) 0:02:26.237 ******** 2026-01-05 02:54:13.679424 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 02:54:13.679430 | orchestrator | 2026-01-05 02:54:13.679436 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-05 02:54:13.679442 | orchestrator | Monday 05 January 2026 02:54:02 +0000 (0:00:00.178) 0:02:26.416 ******** 2026-01-05 02:54:13.679448 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-01-05 02:54:13.679454 | orchestrator | 2026-01-05 02:54:13.679460 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-05 02:54:13.679466 | orchestrator | Monday 05 January 2026 02:54:02 +0000 (0:00:00.163) 0:02:26.579 ******** 2026-01-05 02:54:13.679481 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-01-05 02:54:13.679487 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-01-05 02:54:13.679495 | orchestrator | } 2026-01-05 02:54:13.679502 | orchestrator | 2026-01-05 02:54:13.679508 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-05 02:54:13.679514 | orchestrator | Monday 05 January 2026 02:54:02 +0000 (0:00:00.184) 0:02:26.764 ******** 2026-01-05 02:54:13.679520 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:54:13.679526 | orchestrator | 2026-01-05 02:54:13.679532 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-05 02:54:13.679538 | orchestrator | Monday 05 January 2026 02:54:03 +0000 (0:00:00.153) 0:02:26.918 ******** 2026-01-05 02:54:13.679544 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-05 02:54:13.679550 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-05 02:54:13.679557 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-05 02:54:13.679563 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-05 02:54:13.679569 | orchestrator | 2026-01-05 02:54:13.679575 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-05 02:54:13.679581 | orchestrator | Monday 05 January 2026 02:54:07 +0000 (0:00:04.433) 0:02:31.351 ******** 2026-01-05 02:54:13.679587 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-01-05 02:54:13.679593 | orchestrator | 2026-01-05 02:54:13.679599 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-05 02:54:13.679605 | orchestrator | Monday 05 January 2026 02:54:08 +0000 (0:00:01.269) 0:02:32.621 ******** 2026-01-05 02:54:13.679611 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 02:54:13.679622 | orchestrator | 2026-01-05 02:54:13.679629 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-05 02:54:13.679634 | orchestrator | Monday 05 January 2026 02:54:10 +0000 (0:00:01.421) 0:02:34.042 ******** 2026-01-05 02:54:13.679641 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-05 02:54:13.679646 | orchestrator | 2026-01-05 02:54:13.679652 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-05 02:54:13.679674 | orchestrator | Monday 05 January 2026 02:54:13 +0000 (0:00:03.292) 0:02:37.335 ******** 2026-01-05 02:54:13.679681 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:54:13.679686 | orchestrator | 2026-01-05 02:54:13.679701 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-05 02:54:42.664342 | orchestrator | Monday 05 January 2026 02:54:13 +0000 (0:00:00.124) 0:02:37.460 ******** 2026-01-05 02:54:42.664478 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-05 02:54:42.664491 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-05 02:54:42.664499 | orchestrator | 2026-01-05 02:54:42.664507 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-05 02:54:42.664514 | orchestrator | Monday 05 January 2026 02:54:15 +0000 (0:00:02.136) 0:02:39.596 ******** 2026-01-05 
02:54:42.664521 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:54:42.664559 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:54:42.664568 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:54:42.664575 | orchestrator | 2026-01-05 02:54:42.664582 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-05 02:54:42.664589 | orchestrator | Monday 05 January 2026 02:54:16 +0000 (0:00:00.249) 0:02:39.845 ******** 2026-01-05 02:54:42.664596 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:54:42.664604 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:54:42.664613 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:54:42.664625 | orchestrator | 2026-01-05 02:54:42.664635 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-05 02:54:42.664645 | orchestrator | 2026-01-05 02:54:42.664655 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-05 02:54:42.664666 | orchestrator | Monday 05 January 2026 02:54:17 +0000 (0:00:01.718) 0:02:41.564 ******** 2026-01-05 02:54:42.664698 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:42.664711 | orchestrator | 2026-01-05 02:54:42.664722 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-01-05 02:54:42.664732 | orchestrator | Monday 05 January 2026 02:54:18 +0000 (0:00:00.279) 0:02:41.844 ******** 2026-01-05 02:54:42.664743 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-05 02:54:42.664754 | orchestrator | 2026-01-05 02:54:42.664766 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-05 02:54:42.664777 | orchestrator | Monday 05 January 2026 02:54:18 +0000 (0:00:00.540) 0:02:42.385 ******** 2026-01-05 02:54:42.664788 | orchestrator | ok: [testbed-manager] 2026-01-05 02:54:42.664799 | 
orchestrator | 2026-01-05 02:54:42.664811 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-05 02:54:42.664823 | orchestrator |
2026-01-05 02:54:42.664834 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-05 02:54:42.664844 | orchestrator | Monday 05 January 2026 02:54:22 +0000 (0:00:04.036) 0:02:46.421 ********
2026-01-05 02:54:42.664853 | orchestrator | ok: [testbed-node-3]
2026-01-05 02:54:42.664861 | orchestrator | ok: [testbed-node-4]
2026-01-05 02:54:42.664869 | orchestrator | ok: [testbed-node-5]
2026-01-05 02:54:42.664877 | orchestrator | ok: [testbed-node-0]
2026-01-05 02:54:42.664885 | orchestrator | ok: [testbed-node-1]
2026-01-05 02:54:42.664893 | orchestrator | ok: [testbed-node-2]
2026-01-05 02:54:42.664901 | orchestrator |
2026-01-05 02:54:42.664910 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-05 02:54:42.664938 | orchestrator | Monday 05 January 2026 02:54:23 +0000 (0:00:00.695) 0:02:47.117 ********
2026-01-05 02:54:42.664946 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-05 02:54:42.664954 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-05 02:54:42.664962 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-05 02:54:42.664969 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-05 02:54:42.664976 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-05 02:54:42.664982 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-05 02:54:42.664989 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-05 02:54:42.664995 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-05 02:54:42.665002 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-05 02:54:42.665009 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-05 02:54:42.665015 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-05 02:54:42.665022 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-05 02:54:42.665029 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-05 02:54:42.665035 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-05 02:54:42.665042 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-05 02:54:42.665048 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-05 02:54:42.665055 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-05 02:54:42.665061 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-05 02:54:42.665068 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-05 02:54:42.665075 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-05 02:54:42.665086 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-05 02:54:42.665120 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-05 02:54:42.665134 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-05 02:54:42.665145 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-05 02:54:42.665155 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-05 02:54:42.665166 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-05 02:54:42.665176 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-05 02:54:42.665186 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-05 02:54:42.665196 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-05 02:54:42.665225 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-05 02:54:42.665237 | orchestrator |
2026-01-05 02:54:42.665249 | orchestrator | TASK [Manage annotations] ******************************************************
2026-01-05 02:54:42.665261 | orchestrator | Monday 05 January 2026 02:54:35 +0000 (0:00:11.967) 0:02:59.084 ********
2026-01-05 02:54:42.665272 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:54:42.665284 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:54:42.665300 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:54:42.665306 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:54:42.665313 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:54:42.665320 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:54:42.665326 | orchestrator |
2026-01-05 02:54:42.665333 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-05 02:54:42.665340 | orchestrator | Monday 05 January 2026 02:54:36 +0000 (0:00:00.881) 0:02:59.966 ********
2026-01-05 02:54:42.665346 | orchestrator | skipping: [testbed-node-3]
2026-01-05 02:54:42.665353 | orchestrator | skipping: [testbed-node-4]
2026-01-05 02:54:42.665360 | orchestrator | skipping: [testbed-node-5]
2026-01-05 02:54:42.665366 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:54:42.665373 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:54:42.665379 | orchestrator | skipping: [testbed-node-2]
2026-01-05 02:54:42.665386 | orchestrator |
2026-01-05 02:54:42.665392 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:54:42.665399 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:54:42.665409 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-05 02:54:42.665416 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-05 02:54:42.665423 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-05 02:54:42.665429 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 02:54:42.665437 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 02:54:42.665448 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-05 02:54:42.665459 | orchestrator |
2026-01-05 02:54:42.665468 | orchestrator |
2026-01-05 02:54:42.665478 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:54:42.665489 | orchestrator | Monday 05 January 2026 02:54:42 +0000 (0:00:06.448) 0:03:06.415 ********
2026-01-05 02:54:42.665499 | orchestrator | ===============================================================================
2026-01-05 02:54:42.665509 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.82s
2026-01-05 02:54:42.665520 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 22.23s
2026-01-05 02:54:42.665533 | orchestrator | Manage labels ---------------------------------------------------------- 11.97s
2026-01-05 02:54:42.665543 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.69s
2026-01-05 02:54:42.665554 | orchestrator | Manage taints ----------------------------------------------------------- 6.45s
2026-01-05 02:54:42.665564 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.69s
2026-01-05 02:54:42.665571 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 4.43s
2026-01-05 02:54:42.665577 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.04s
2026-01-05 02:54:42.665584 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 3.29s
2026-01-05 02:54:42.665590 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.44s
2026-01-05 02:54:42.665597 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.44s
2026-01-05 02:54:42.665604 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 2.32s
2026-01-05 02:54:42.665629 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.25s
2026-01-05 02:54:43.230091 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.14s
2026-01-05 02:54:43.230181 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.95s
2026-01-05 02:54:43.230189 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.80s
2026-01-05 02:54:43.230194 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.73s
2026-01-05 02:54:43.230198 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.73s
2026-01-05 02:54:43.230205 | orchestrator | k3s_server_post : Remove tmp directory used for manifests --------------- 1.72s
2026-01-05 02:54:43.230210 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 1.67s
2026-01-05 02:54:43.582983 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-01-05 02:54:43.583091 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh
2026-01-05 02:54:43.588725 | orchestrator | + set -e
2026-01-05 02:54:43.588820 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-05 02:54:43.588833 | orchestrator | ++ export INTERACTIVE=false
2026-01-05 02:54:43.588842 | orchestrator | ++ INTERACTIVE=false
2026-01-05 02:54:43.588848 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-05 02:54:43.588852 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-05 02:54:43.588857 | orchestrator | + osism apply openstackclient
2026-01-05 02:54:56.425134 | orchestrator | 2026-01-05 02:54:56 | INFO  | Task e3bfdf48-6606-4b89-bd67-b7e8d2667da7 (openstackclient) was prepared for execution.
2026-01-05 02:54:56.425257 | orchestrator | 2026-01-05 02:54:56 | INFO  | It takes a moment until task e3bfdf48-6606-4b89-bd67-b7e8d2667da7 (openstackclient) has been started and output is visible here.
2026-01-05 02:55:21.757162 | orchestrator |
2026-01-05 02:55:21.757258 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-05 02:55:21.757277 | orchestrator |
2026-01-05 02:55:21.757296 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-05 02:55:21.757385 | orchestrator | Monday 05 January 2026 02:55:04 +0000 (0:00:03.319) 0:00:03.319 ********
2026-01-05 02:55:21.757401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-05 02:55:21.757414 | orchestrator |
2026-01-05 02:55:21.757427 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-05 02:55:21.757440 | orchestrator | Monday 05 January 2026 02:55:07 +0000 (0:00:02.526) 0:00:05.846 ********
2026-01-05 02:55:21.757453 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-05 02:55:21.757463 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-05 02:55:21.757471 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-05 02:55:21.757478 | orchestrator |
2026-01-05 02:55:21.757490 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-05 02:55:21.757501 | orchestrator | Monday 05 January 2026 02:55:10 +0000 (0:00:03.765) 0:00:09.611 ********
2026-01-05 02:55:21.757512 | orchestrator | ok: [testbed-manager]
2026-01-05 02:55:21.757525 | orchestrator |
2026-01-05 02:55:21.757536 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-05 02:55:21.757548 | orchestrator | Monday 05 January 2026 02:55:14 +0000 (0:00:02.126) 0:00:13.259 ********
2026-01-05 02:55:21.757560 | orchestrator | ok: [testbed-manager]
2026-01-05 02:55:21.757573 | orchestrator |
2026-01-05 02:55:21.757585 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-05 02:55:21.757597 | orchestrator | Monday 05 January 2026 02:55:16 +0000 (0:00:02.126) 0:00:15.386 ********
2026-01-05 02:55:21.757608 | orchestrator | ok: [testbed-manager]
2026-01-05 02:55:21.757621 | orchestrator |
2026-01-05 02:55:21.757634 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-05 02:55:21.757669 | orchestrator | Monday 05 January 2026 02:55:18 +0000 (0:00:02.130) 0:00:17.517 ********
2026-01-05 02:55:21.757677 | orchestrator | ok: [testbed-manager]
2026-01-05 02:55:21.757685 | orchestrator |
2026-01-05 02:55:21.757692 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 02:55:21.757700 | orchestrator | testbed-manager : ok=6  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 02:55:21.757741 | orchestrator |
2026-01-05 02:55:21.757750 | orchestrator |
2026-01-05 02:55:21.757759 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 02:55:21.757767 | orchestrator | Monday 05 January 2026 02:55:21 +0000 (0:00:02.246) 0:00:19.763 ********
2026-01-05 02:55:21.757776 | orchestrator | ===============================================================================
2026-01-05 02:55:21.757784 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.77s
2026-01-05 02:55:21.757796 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.65s
2026-01-05 02:55:21.757812 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 2.53s
2026-01-05 02:55:21.757830 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.25s
2026-01-05 02:55:21.757843 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.13s
2026-01-05 02:55:21.757856 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.13s
2026-01-05 02:55:22.314242 | orchestrator | + osism apply -a upgrade common
2026-01-05 02:55:24.672472 | orchestrator | 2026-01-05 02:55:24 | INFO  | Task 3e775267-13ba-4067-86fd-1b5170469d6b (common) was prepared for execution.
2026-01-05 02:55:24.672527 | orchestrator | 2026-01-05 02:55:24 | INFO  | It takes a moment until task 3e775267-13ba-4067-86fd-1b5170469d6b (common) has been started and output is visible here.
2026-01-05 02:56:43.366079 | orchestrator |
2026-01-05 02:56:43.366202 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-05 02:56:43.366213 | orchestrator |
2026-01-05 02:56:43.366220 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-05 02:56:43.366228 | orchestrator | Monday 05 January 2026 02:56:27 +0000 (0:00:03.794) 0:00:03.794 ********
2026-01-05 02:56:43.366235 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 02:56:43.366244 | orchestrator |
2026-01-05 02:56:43.366268 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-05 02:56:43.366275 | orchestrator | Monday 05 January 2026 02:56:33 +0000 (0:00:06.021) 0:00:09.815 ********
2026-01-05 02:56:43.366282 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 02:56:43.366289 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 02:56:43.366296 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 02:56:43.366304 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 02:56:43.366311 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 02:56:43.366317 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 02:56:43.366324 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 02:56:43.366330 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 02:56:43.366337 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 02:56:43.366343 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 02:56:43.366349 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 02:56:43.366356 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-05 02:56:43.366390 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 02:56:43.366396 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 02:56:43.366401 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 02:56:43.366407 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 02:56:43.366412 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-05 02:56:43.366418 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 02:56:43.366426 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 02:56:43.366432 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 02:56:43.366438 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-05 02:56:43.366444 | orchestrator |
2026-01-05 02:56:43.366450 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-05 02:56:43.366456 | orchestrator | Monday 05 January 2026 02:56:38 +0000 (0:00:04.286) 0:00:14.101 ********
2026-01-05 02:56:43.366463 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-05 02:56:43.366471 | orchestrator |
2026-01-05 02:56:43.366477 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-01-05 02:56:43.366484 | orchestrator | Monday 05 January 2026 02:56:40 +0000 (0:00:02.613) 0:00:16.715 ********
2026-01-05 02:56:43.366498 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:43.366510 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:43.366541 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:43.366549 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:43.366556 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:43.366570 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:43.366576 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:43.366584 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:43.368089 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:43.368170 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069538 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069647 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069662 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069667 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069671 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069675 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069680 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069684 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069702 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069710 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069714 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069718 | orchestrator |
2026-01-05 02:56:46.069723 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-01-05 02:56:46.069728 | orchestrator | Monday 05 January 2026 02:56:45 +0000 (0:00:04.829) 0:00:21.545 ********
2026-01-05 02:56:46.069737 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:46.069742 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:46.069752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:46.069858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:48.478202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:48.478330 | orchestrator | skipping: [testbed-manager]
2026-01-05 02:56:48.478346 | orchestrator | skipping: [testbed-node-0]
2026-01-05 02:56:48.478357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:48.478370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:48.478380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:48.478390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:48.478399 | orchestrator | skipping: [testbed-node-1]
2026-01-05 02:56:48.478409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:48.478418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:48.478486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:48.478506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:48.478516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-05 02:56:48.478530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-05 02:56:48.478540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment':
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:48.478549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:48.478558 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:56:48.478567 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:56:48.478576 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:56:48.478585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 02:56:48.478594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:48.478617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558159 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:56:51.558259 | orchestrator | 2026-01-05 02:56:51.558273 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-05 02:56:51.558285 | orchestrator | Monday 05 January 2026 02:56:48 +0000 (0:00:02.779) 0:00:24.324 ******** 2026-01-05 02:56:51.558299 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 02:56:51.558332 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558344 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558355 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:56:51.558365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 02:56:51.558375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-05 02:56:51.558385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 02:56:51.558445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558455 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:56:51.558465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558476 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:56:51.558490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 02:56:51.558502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558522 | orchestrator | 
skipping: [testbed-node-2] 2026-01-05 02:56:51.558533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 02:56:51.558551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:51.558592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 02:56:58.729855 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:56:58.729946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:58.729957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:58.729963 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:56:58.729983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-05 02:56:58.729991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:58.730091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:56:58.730097 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:56:58.730102 | orchestrator | 2026-01-05 02:56:58.730107 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-05 02:56:58.730112 | orchestrator | Monday 05 January 2026 02:56:51 +0000 (0:00:03.083) 0:00:27.407 ******** 2026-01-05 02:56:58.730116 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:56:58.730120 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:56:58.730125 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:56:58.730129 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:56:58.730133 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:56:58.730137 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:56:58.730141 | orchestrator | 
skipping: [testbed-node-5] 2026-01-05 02:56:58.730145 | orchestrator | 2026-01-05 02:56:58.730149 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-05 02:56:58.730154 | orchestrator | Monday 05 January 2026 02:56:53 +0000 (0:00:02.326) 0:00:29.733 ******** 2026-01-05 02:56:58.730160 | orchestrator | skipping: [testbed-manager] 2026-01-05 02:56:58.730166 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:56:58.730172 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:56:58.730179 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:56:58.730185 | orchestrator | skipping: [testbed-node-3] 2026-01-05 02:56:58.730195 | orchestrator | skipping: [testbed-node-4] 2026-01-05 02:56:58.730203 | orchestrator | skipping: [testbed-node-5] 2026-01-05 02:56:58.730211 | orchestrator | 2026-01-05 02:56:58.730217 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-05 02:56:58.730223 | orchestrator | Monday 05 January 2026 02:56:55 +0000 (0:00:02.079) 0:00:31.812 ******** 2026-01-05 02:56:58.730252 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:56:58.730261 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:56:58.730273 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:56:58.730286 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:56:58.730293 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:56:58.730299 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:56:58.730307 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:56:58.730314 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:56:58.730330 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305742 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305884 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305898 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305906 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305915 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305921 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305927 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305951 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305965 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305980 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305986 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.305994 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:13.306001 | orchestrator | 2026-01-05 02:57:13.306006 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-05 02:57:13.306049 | orchestrator | Monday 05 January 2026 02:57:00 +0000 (0:00:04.971) 0:00:36.784 ******** 2026-01-05 02:57:13.306060 | orchestrator | [WARNING]: Skipped 2026-01-05 02:57:13.306069 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-05 02:57:13.306078 | orchestrator | to this access issue: 2026-01-05 02:57:13.306085 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-05 02:57:13.306092 | orchestrator | directory 2026-01-05 02:57:13.306100 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 02:57:13.306107 | orchestrator | 2026-01-05 02:57:13.306112 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-05 02:57:13.306116 | orchestrator | Monday 05 January 2026 02:57:03 +0000 (0:00:02.689) 0:00:39.474 ******** 2026-01-05 02:57:13.306120 | orchestrator | [WARNING]: Skipped 2026-01-05 02:57:13.306124 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-05 02:57:13.306128 | orchestrator | to this access issue: 2026-01-05 02:57:13.306132 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-05 02:57:13.306136 | orchestrator | directory 2026-01-05 02:57:13.306140 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 02:57:13.306144 | orchestrator | 2026-01-05 02:57:13.306149 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-05 02:57:13.306153 | orchestrator | Monday 05 January 2026 02:57:05 +0000 (0:00:01.950) 0:00:41.425 ******** 2026-01-05 02:57:13.306157 | orchestrator | [WARNING]: Skipped 2026-01-05 02:57:13.306161 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-05 02:57:13.306165 | orchestrator | to this access issue: 2026-01-05 02:57:13.306169 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-05 02:57:13.306173 | orchestrator | directory 2026-01-05 02:57:13.306177 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 02:57:13.306181 | orchestrator | 2026-01-05 02:57:13.306185 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-05 02:57:13.306190 | orchestrator | Monday 05 January 2026 02:57:07 +0000 (0:00:01.880) 0:00:43.306 ******** 2026-01-05 02:57:13.306194 | orchestrator | [WARNING]: Skipped 2026-01-05 02:57:13.306198 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-05 02:57:13.306217 | orchestrator | to this access issue: 2026-01-05 02:57:13.306221 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-05 02:57:13.306225 | orchestrator | directory 2026-01-05 02:57:13.306230 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-05 02:57:13.306234 | orchestrator | 2026-01-05 02:57:13.306238 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-05 02:57:13.306242 | 
orchestrator | Monday 05 January 2026 02:57:09 +0000 (0:00:01.926) 0:00:45.232 ******** 2026-01-05 02:57:13.306246 | orchestrator | ok: [testbed-manager] 2026-01-05 02:57:13.306250 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:57:13.306254 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:57:13.306258 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:57:13.306263 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:57:13.306267 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:57:13.306276 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:57:22.762309 | orchestrator | 2026-01-05 02:57:22.762421 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-05 02:57:22.762434 | orchestrator | Monday 05 January 2026 02:57:13 +0000 (0:00:03.911) 0:00:49.144 ******** 2026-01-05 02:57:22.762442 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 02:57:22.762450 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 02:57:22.762457 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 02:57:22.762464 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 02:57:22.762471 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 02:57:22.762478 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 02:57:22.762485 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-05 02:57:22.762492 | orchestrator | 2026-01-05 02:57:22.762499 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-05 02:57:22.762506 | 
orchestrator | Monday 05 January 2026 02:57:16 +0000 (0:00:03.135) 0:00:52.280 ******** 2026-01-05 02:57:22.762513 | orchestrator | ok: [testbed-manager] 2026-01-05 02:57:22.762521 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:57:22.762543 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:57:22.762551 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:57:22.762557 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:57:22.762564 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:57:22.762571 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:57:22.762578 | orchestrator | 2026-01-05 02:57:22.762585 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-05 02:57:22.762592 | orchestrator | Monday 05 January 2026 02:57:19 +0000 (0:00:03.116) 0:00:55.396 ******** 2026-01-05 02:57:22.762601 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:22.762612 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:57:22.762652 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:22.762661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:57:22.762682 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:22.762694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:57:22.762711 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:22.762720 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:22.762727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:57:22.762740 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:22.762747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:57:22.762755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:22.762829 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:31.681188 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:31.681280 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:31.681289 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:31.681296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:57:31.681316 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:31.681321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 02:57:31.681326 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:31.681331 | orchestrator | ok: [testbed-node-5] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:31.681336 | orchestrator | 2026-01-05 02:57:31.681341 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-05 02:57:31.681346 | orchestrator | Monday 05 January 2026 02:57:22 +0000 (0:00:03.213) 0:00:58.609 ******** 2026-01-05 02:57:31.681362 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 02:57:31.681368 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 02:57:31.681372 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 02:57:31.681380 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 02:57:31.681385 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 02:57:31.681389 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 02:57:31.681394 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-05 02:57:31.681398 | orchestrator | 2026-01-05 02:57:31.681402 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-05 02:57:31.681407 | orchestrator | Monday 05 January 2026 02:57:25 +0000 (0:00:02.994) 0:01:01.604 ******** 2026-01-05 02:57:31.681411 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 02:57:31.681415 
| orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 02:57:31.681420 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 02:57:31.681424 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 02:57:31.681433 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 02:57:31.681438 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 02:57:31.681442 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-05 02:57:31.681446 | orchestrator | 2026-01-05 02:57:31.681451 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-05 02:57:31.681455 | orchestrator | Monday 05 January 2026 02:57:29 +0000 (0:00:03.621) 0:01:05.226 ******** 2026-01-05 02:57:31.681459 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:31.681465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:31.681470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:31.681475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:31.681482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:37.712295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:37.712392 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-05 02:57:37.712427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712479 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:57:37.712534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 02:58:59.480580 | orchestrator | 2026-01-05 02:58:59.480704 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 02:58:59.480732 | orchestrator | Monday 05 January 2026 02:57:34 +0000 (0:00:04.662) 0:01:09.889 ******** 2026-01-05 02:58:59.480760 | orchestrator | 2026-01-05 02:58:59.480767 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 02:58:59.480773 | orchestrator | Monday 05 January 2026 02:57:34 +0000 (0:00:00.463) 0:01:10.353 ******** 2026-01-05 02:58:59.480780 | orchestrator | 2026-01-05 02:58:59.480784 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 02:58:59.480788 | orchestrator | Monday 05 January 2026 02:57:34 +0000 (0:00:00.463) 0:01:10.816 ******** 2026-01-05 02:58:59.480792 | orchestrator | 2026-01-05 02:58:59.480796 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 02:58:59.480800 | orchestrator | Monday 05 January 2026 02:57:35 +0000 (0:00:00.462) 0:01:11.278 ******** 2026-01-05 02:58:59.480804 | orchestrator | 2026-01-05 02:58:59.480807 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 02:58:59.480811 | orchestrator | Monday 05 
January 2026 02:57:35 +0000 (0:00:00.450) 0:01:11.729 ******** 2026-01-05 02:58:59.480833 | orchestrator | 2026-01-05 02:58:59.480838 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 02:58:59.480842 | orchestrator | Monday 05 January 2026 02:57:36 +0000 (0:00:00.478) 0:01:12.207 ******** 2026-01-05 02:58:59.480845 | orchestrator | 2026-01-05 02:58:59.480849 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-05 02:58:59.480853 | orchestrator | Monday 05 January 2026 02:57:36 +0000 (0:00:00.464) 0:01:12.671 ******** 2026-01-05 02:58:59.480857 | orchestrator | 2026-01-05 02:58:59.480861 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-05 02:58:59.480865 | orchestrator | Monday 05 January 2026 02:57:37 +0000 (0:00:00.873) 0:01:13.545 ******** 2026-01-05 02:58:59.480869 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:58:59.480874 | orchestrator | changed: [testbed-node-4] 2026-01-05 02:58:59.480878 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:58:59.480881 | orchestrator | changed: [testbed-node-3] 2026-01-05 02:58:59.480885 | orchestrator | changed: [testbed-node-5] 2026-01-05 02:58:59.480889 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:58:59.480893 | orchestrator | changed: [testbed-manager] 2026-01-05 02:58:59.480897 | orchestrator | 2026-01-05 02:58:59.480900 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-05 02:58:59.480905 | orchestrator | Monday 05 January 2026 02:58:10 +0000 (0:00:32.746) 0:01:46.291 ******** 2026-01-05 02:58:59.480909 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:58:59.480912 | orchestrator | changed: [testbed-node-4] 2026-01-05 02:58:59.480916 | orchestrator | changed: [testbed-node-3] 2026-01-05 02:58:59.480920 | orchestrator | changed: [testbed-node-1] 2026-01-05 
02:58:59.480924 | orchestrator | changed: [testbed-node-5] 2026-01-05 02:58:59.480927 | orchestrator | changed: [testbed-manager] 2026-01-05 02:58:59.480931 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:58:59.480935 | orchestrator | 2026-01-05 02:58:59.480939 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-05 02:58:59.480943 | orchestrator | Monday 05 January 2026 02:58:43 +0000 (0:00:32.687) 0:02:18.979 ******** 2026-01-05 02:58:59.480947 | orchestrator | ok: [testbed-manager] 2026-01-05 02:58:59.480951 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:58:59.480955 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:58:59.480959 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:58:59.480963 | orchestrator | ok: [testbed-node-3] 2026-01-05 02:58:59.480967 | orchestrator | ok: [testbed-node-5] 2026-01-05 02:58:59.480971 | orchestrator | ok: [testbed-node-4] 2026-01-05 02:58:59.480974 | orchestrator | 2026-01-05 02:58:59.480978 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-05 02:58:59.480982 | orchestrator | Monday 05 January 2026 02:58:46 +0000 (0:00:03.130) 0:02:22.110 ******** 2026-01-05 02:58:59.480986 | orchestrator | changed: [testbed-manager] 2026-01-05 02:58:59.480990 | orchestrator | changed: [testbed-node-3] 2026-01-05 02:58:59.480994 | orchestrator | changed: [testbed-node-0] 2026-01-05 02:58:59.481002 | orchestrator | changed: [testbed-node-4] 2026-01-05 02:58:59.481005 | orchestrator | changed: [testbed-node-5] 2026-01-05 02:58:59.481009 | orchestrator | changed: [testbed-node-1] 2026-01-05 02:58:59.481013 | orchestrator | changed: [testbed-node-2] 2026-01-05 02:58:59.481017 | orchestrator | 2026-01-05 02:58:59.481021 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 02:58:59.481026 | orchestrator | testbed-manager : ok=20  changed=4  unreachable=0 failed=0 
skipped=4  rescued=0 ignored=0 2026-01-05 02:58:59.481032 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 02:58:59.481036 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 02:58:59.481040 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 02:58:59.481044 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 02:58:59.481048 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 02:58:59.481052 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-05 02:58:59.481055 | orchestrator | 2026-01-05 02:58:59.481059 | orchestrator | 2026-01-05 02:58:59.481077 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 02:58:59.481081 | orchestrator | Monday 05 January 2026 02:58:58 +0000 (0:00:12.704) 0:02:34.815 ******** 2026-01-05 02:58:59.481088 | orchestrator | =============================================================================== 2026-01-05 02:58:59.481092 | orchestrator | common : Restart fluentd container ------------------------------------- 32.75s 2026-01-05 02:58:59.481096 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.69s 2026-01-05 02:58:59.481100 | orchestrator | common : Restart cron container ---------------------------------------- 12.70s 2026-01-05 02:58:59.481104 | orchestrator | common : include_tasks -------------------------------------------------- 6.02s 2026-01-05 02:58:59.481108 | orchestrator | common : Copying over config.json files for services -------------------- 4.97s 2026-01-05 02:58:59.481111 | orchestrator | service-cert-copy : common | Copying over extra CA 
certificates --------- 4.83s 2026-01-05 02:58:59.481115 | orchestrator | common : Check common containers ---------------------------------------- 4.66s 2026-01-05 02:58:59.481119 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.29s 2026-01-05 02:58:59.481123 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.91s 2026-01-05 02:58:59.481128 | orchestrator | common : Flush handlers ------------------------------------------------- 3.66s 2026-01-05 02:58:59.481132 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.62s 2026-01-05 02:58:59.481136 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.21s 2026-01-05 02:58:59.481141 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.14s 2026-01-05 02:58:59.481145 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.13s 2026-01-05 02:58:59.481150 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.12s 2026-01-05 02:58:59.481154 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.08s 2026-01-05 02:58:59.481159 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.00s 2026-01-05 02:58:59.481163 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.78s 2026-01-05 02:58:59.481174 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.69s 2026-01-05 02:58:59.481181 | orchestrator | common : include_tasks -------------------------------------------------- 2.61s 2026-01-05 02:58:59.866398 | orchestrator | + osism apply -a upgrade loadbalancer 2026-01-05 02:59:02.064497 | orchestrator | 2026-01-05 02:59:02 | INFO  | Task 8e5a1617-052d-457d-bed2-c4a17f2fad74 (loadbalancer) was prepared 
for execution. 2026-01-05 02:59:02.064642 | orchestrator | 2026-01-05 02:59:02 | INFO  | It takes a moment until task 8e5a1617-052d-457d-bed2-c4a17f2fad74 (loadbalancer) has been started and output is visible here. 2026-01-05 02:59:23.564366 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-01-05 02:59:23.564519 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-01-05 02:59:23.564628 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-01-05 02:59:23.564649 | orchestrator | (): 'NoneType' object is not subscriptable 2026-01-05 02:59:23.564690 | orchestrator | 2026-01-05 02:59:23.564709 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 02:59:23.564721 | orchestrator | 2026-01-05 02:59:23.564733 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 02:59:23.564744 | orchestrator | Monday 05 January 2026 02:59:07 +0000 (0:00:01.072) 0:00:01.072 ******** 2026-01-05 02:59:23.564755 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:59:23.564767 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:59:23.564778 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:59:23.564790 | orchestrator | 2026-01-05 02:59:23.564801 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 02:59:23.564812 | orchestrator | Monday 05 January 2026 02:59:08 +0000 (0:00:00.910) 0:00:01.982 ******** 2026-01-05 02:59:23.564823 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-05 02:59:23.564872 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-05 02:59:23.564885 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-05 02:59:23.564898 | orchestrator | 2026-01-05 02:59:23.564911 | orchestrator | PLAY [Apply role loadbalancer] 
************************************************* 2026-01-05 02:59:23.564924 | orchestrator | 2026-01-05 02:59:23.564937 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-05 02:59:23.564950 | orchestrator | Monday 05 January 2026 02:59:09 +0000 (0:00:00.918) 0:00:02.901 ******** 2026-01-05 02:59:23.564963 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:59:23.564976 | orchestrator | 2026-01-05 02:59:23.564989 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] *** 2026-01-05 02:59:23.565002 | orchestrator | Monday 05 January 2026 02:59:10 +0000 (0:00:01.186) 0:00:04.087 ******** 2026-01-05 02:59:23.565015 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:59:23.565029 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:59:23.565049 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:59:23.565067 | orchestrator | 2026-01-05 02:59:23.565086 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-01-05 02:59:23.565103 | orchestrator | Monday 05 January 2026 02:59:11 +0000 (0:00:01.266) 0:00:05.354 ******** 2026-01-05 02:59:23.565121 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:59:23.565140 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:59:23.565160 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:59:23.565178 | orchestrator | 2026-01-05 02:59:23.565220 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-01-05 02:59:23.565241 | orchestrator | Monday 05 January 2026 02:59:13 +0000 (0:00:01.147) 0:00:06.502 ******** 2026-01-05 02:59:23.565308 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:59:23.565326 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:59:23.565344 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:59:23.565364 | orchestrator | 2026-01-05 
02:59:23.565383 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-05 02:59:23.565401 | orchestrator | Monday 05 January 2026 02:59:13 +0000 (0:00:00.700) 0:00:07.202 ******** 2026-01-05 02:59:23.565420 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:59:23.565438 | orchestrator | 2026-01-05 02:59:23.565457 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-01-05 02:59:23.565476 | orchestrator | Monday 05 January 2026 02:59:14 +0000 (0:00:01.252) 0:00:08.455 ******** 2026-01-05 02:59:23.565497 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:59:23.565516 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:59:23.565534 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:59:23.565553 | orchestrator | 2026-01-05 02:59:23.565571 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-01-05 02:59:23.565590 | orchestrator | Monday 05 January 2026 02:59:15 +0000 (0:00:00.683) 0:00:09.138 ******** 2026-01-05 02:59:23.565609 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-05 02:59:23.565629 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-05 02:59:23.565647 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-05 02:59:23.565666 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-05 02:59:23.565685 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-05 02:59:23.565703 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-05 02:59:23.565721 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 
2026-01-05 02:59:23.565741 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-05 02:59:23.565758 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-05 02:59:23.565776 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-05 02:59:23.565794 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-05 02:59:23.565962 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-05 02:59:23.565993 | orchestrator | 2026-01-05 02:59:23.566012 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-05 02:59:23.566116 | orchestrator | Monday 05 January 2026 02:59:18 +0000 (0:00:02.526) 0:00:11.665 ******** 2026-01-05 02:59:23.566135 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-01-05 02:59:23.566153 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-01-05 02:59:23.566173 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-01-05 02:59:23.566192 | orchestrator | 2026-01-05 02:59:23.566209 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-05 02:59:23.566224 | orchestrator | Monday 05 January 2026 02:59:19 +0000 (0:00:01.017) 0:00:12.682 ******** 2026-01-05 02:59:23.566235 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-01-05 02:59:23.566246 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-01-05 02:59:23.566257 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-01-05 02:59:23.566267 | orchestrator | 2026-01-05 02:59:23.566278 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-05 02:59:23.566289 | orchestrator | Monday 05 January 2026 02:59:20 +0000 (0:00:01.346) 0:00:14.028 ******** 2026-01-05 
02:59:23.566300 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-05 02:59:23.566311 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:59:23.566322 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-05 02:59:23.566346 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:59:23.566357 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-05 02:59:23.566368 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:59:23.566379 | orchestrator | 2026-01-05 02:59:23.566389 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-05 02:59:23.566400 | orchestrator | Monday 05 January 2026 02:59:21 +0000 (0:00:01.185) 0:00:15.214 ******** 2026-01-05 02:59:23.566418 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:23.566435 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:23.566446 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:23.566456 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:23.566481 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:29.915625 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:29.915769 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 02:59:29.915785 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 02:59:29.915798 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 02:59:29.915807 | orchestrator | 2026-01-05 02:59:29.915817 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-05 02:59:29.915826 | orchestrator | Monday 05 January 2026 02:59:23 +0000 (0:00:01.811) 0:00:17.025 ******** 2026-01-05 02:59:29.915910 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:59:29.915919 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:59:29.915926 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:59:29.915933 | orchestrator | 2026-01-05 02:59:29.915941 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-05 02:59:29.915948 | orchestrator | Monday 05 January 2026 02:59:24 +0000 (0:00:01.002) 0:00:18.028 ******** 2026-01-05 02:59:29.915956 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-01-05 02:59:29.915964 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-01-05 02:59:29.915971 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-01-05 02:59:29.915979 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-01-05 02:59:29.915986 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-01-05 02:59:29.915993 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-01-05 02:59:29.916000 | orchestrator | 2026-01-05 02:59:29.916008 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-05 02:59:29.916015 | orchestrator | Monday 05 January 2026 02:59:26 +0000 (0:00:01.926) 0:00:19.954 ******** 2026-01-05 02:59:29.916022 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:59:29.916029 | orchestrator | ok: 
[testbed-node-1] 2026-01-05 02:59:29.916037 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:59:29.916044 | orchestrator | 2026-01-05 02:59:29.916051 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-05 02:59:29.916058 | orchestrator | Monday 05 January 2026 02:59:27 +0000 (0:00:01.480) 0:00:21.434 ******** 2026-01-05 02:59:29.916066 | orchestrator | ok: [testbed-node-0] 2026-01-05 02:59:29.916073 | orchestrator | ok: [testbed-node-1] 2026-01-05 02:59:29.916080 | orchestrator | ok: [testbed-node-2] 2026-01-05 02:59:29.916087 | orchestrator | 2026-01-05 02:59:29.916095 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-05 02:59:29.916110 | orchestrator | Monday 05 January 2026 02:59:29 +0000 (0:00:01.277) 0:00:22.712 ******** 2026-01-05 02:59:29.916134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 02:59:29.916143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:29.916152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:29.916165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 02:59:29.916173 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:59:29.916181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 02:59:29.916189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:29.916202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:29.916227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c', 
'__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 02:59:33.193891 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:59:33.193965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 02:59:33.193974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:33.193993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:33.193999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 02:59:33.194003 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:59:33.194007 | orchestrator | 2026-01-05 02:59:33.194012 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-05 02:59:33.194084 | orchestrator | Monday 05 January 2026 02:59:29 +0000 (0:00:00.667) 0:00:23.380 ******** 2026-01-05 02:59:33.194090 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:33.194094 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:33.194114 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:33.194119 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:33.194126 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:33.194130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 02:59:33.194135 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:33.194143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:33.194150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 02:59:41.750148 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:41.750265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:41.750299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c', '__omit_place_holder__eabc99f207b4c7e6908eaa0d4124f90dac9c2c9c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-05 02:59:41.750311 | orchestrator | 2026-01-05 02:59:41.750323 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-05 02:59:41.750331 | orchestrator | Monday 05 January 2026 02:59:33 +0000 (0:00:03.275) 0:00:26.655 ******** 2026-01-05 02:59:41.750338 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:41.750362 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:41.750368 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:41.750389 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:41.750395 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:41.750401 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:41.750408 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 02:59:41.750420 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 02:59:41.750431 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 02:59:41.750437 | orchestrator | 2026-01-05 02:59:41.750444 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-05 02:59:41.750449 | orchestrator | Monday 05 January 2026 02:59:36 +0000 (0:00:03.614) 0:00:30.270 ******** 2026-01-05 02:59:41.750456 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 02:59:41.750463 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 02:59:41.750469 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-05 02:59:41.750474 | orchestrator | 2026-01-05 02:59:41.750480 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-05 02:59:41.750486 | orchestrator | Monday 05 January 2026 02:59:38 +0000 (0:00:01.707) 0:00:31.977 ******** 2026-01-05 02:59:41.750492 | 
orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 02:59:41.750499 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 02:59:41.750509 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-05 02:59:55.861390 | orchestrator | 2026-01-05 02:59:55.861488 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-05 02:59:55.861496 | orchestrator | Monday 05 January 2026 02:59:41 +0000 (0:00:03.240) 0:00:35.218 ******** 2026-01-05 02:59:55.861501 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:59:55.861507 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:59:55.861511 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:59:55.861515 | orchestrator | 2026-01-05 02:59:55.861520 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-05 02:59:55.861524 | orchestrator | Monday 05 January 2026 02:59:42 +0000 (0:00:01.188) 0:00:36.406 ******** 2026-01-05 02:59:55.861529 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 02:59:55.861534 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 02:59:55.861538 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-05 02:59:55.861542 | orchestrator | 2026-01-05 02:59:55.861546 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-05 02:59:55.861550 | orchestrator | Monday 05 January 2026 02:59:44 +0000 (0:00:02.044) 0:00:38.451 ******** 2026-01-05 02:59:55.861570 | orchestrator | ok: 
[testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 02:59:55.861575 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 02:59:55.861579 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-05 02:59:55.861583 | orchestrator | 2026-01-05 02:59:55.861598 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-05 02:59:55.861602 | orchestrator | Monday 05 January 2026 02:59:46 +0000 (0:00:01.847) 0:00:40.299 ******** 2026-01-05 02:59:55.861607 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-01-05 02:59:55.861612 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-01-05 02:59:55.861616 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-01-05 02:59:55.861620 | orchestrator | 2026-01-05 02:59:55.861623 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-05 02:59:55.861627 | orchestrator | Monday 05 January 2026 02:59:48 +0000 (0:00:01.591) 0:00:41.891 ******** 2026-01-05 02:59:55.861631 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-05 02:59:55.861635 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-05 02:59:55.861639 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-05 02:59:55.861643 | orchestrator | 2026-01-05 02:59:55.861647 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-05 02:59:55.861651 | orchestrator | Monday 05 January 2026 02:59:50 +0000 (0:00:01.851) 0:00:43.742 ******** 2026-01-05 02:59:55.861656 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 02:59:55.861660 | orchestrator | 2026-01-05 
02:59:55.861664 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-05 02:59:55.861668 | orchestrator | Monday 05 January 2026 02:59:51 +0000 (0:00:01.014) 0:00:44.757 ******** 2026-01-05 02:59:55.861675 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:55.861682 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:55.861699 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 02:59:55.861708 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:55.861716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:55.861721 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 02:59:55.861725 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 02:59:55.861732 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 02:59:55.861739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 02:59:55.861745 | orchestrator | 2026-01-05 02:59:55.861751 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend 
internal TLS certificate] *** 2026-01-05 02:59:55.861758 | orchestrator | Monday 05 January 2026 02:59:54 +0000 (0:00:03.538) 0:00:48.296 ******** 2026-01-05 02:59:55.861771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 02:59:56.764891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:56.765004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:56.765019 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:59:56.765030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 02:59:56.765040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:56.765049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-01-05 02:59:56.765057 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:59:56.765066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 02:59:56.765113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:56.765123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:56.765131 | 
orchestrator | skipping: [testbed-node-2] 2026-01-05 02:59:56.765140 | orchestrator | 2026-01-05 02:59:56.765149 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-05 02:59:56.765158 | orchestrator | Monday 05 January 2026 02:59:55 +0000 (0:00:01.034) 0:00:49.331 ******** 2026-01-05 02:59:56.765172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 02:59:56.765181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:56.765189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:56.765198 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:59:56.765206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 02:59:56.765221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:56.765236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:58.618831 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:59:58.619000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 02:59:58.619027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:58.619045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:58.619061 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:59:58.619076 | orchestrator | 2026-01-05 02:59:58.619091 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-05 02:59:58.619106 | orchestrator | Monday 05 January 2026 02:59:56 +0000 (0:00:00.898) 0:00:50.229 ******** 2026-01-05 02:59:58.619121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 02:59:58.619164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:58.619180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:58.619195 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:59:58.619232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 02:59:58.619248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:58.619264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:58.619279 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:59:58.619294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 02:59:58.619308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:58.619332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:58.619347 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:59:58.619362 | orchestrator | 2026-01-05 02:59:58.619377 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-05 02:59:58.619393 | orchestrator | Monday 05 January 2026 02:59:57 +0000 (0:00:01.192) 0:00:51.422 ******** 2026-01-05 02:59:58.619462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 02:59:59.486950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-01-05 02:59:59.487039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:59.487049 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:59:59.487058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 02:59:59.487088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:59.487094 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:59.487100 | orchestrator | skipping: [testbed-node-1] 2026-01-05 02:59:59.487106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 02:59:59.487127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:59.487139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:59.487145 | orchestrator | skipping: [testbed-node-2] 2026-01-05 02:59:59.487152 | orchestrator | 2026-01-05 02:59:59.487158 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-05 02:59:59.487166 | orchestrator | Monday 05 January 2026 02:59:58 +0000 (0:00:00.670) 0:00:52.092 ******** 2026-01-05 02:59:59.487172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 02:59:59.487183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:59.487189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 02:59:59.487195 | orchestrator | skipping: [testbed-node-0] 2026-01-05 02:59:59.487201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 02:59:59.487207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 02:59:59.487219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:01.262329 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:01.262431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 03:00:01.262443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:01.262473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:01.262481 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:01.262488 | orchestrator | 2026-01-05 03:00:01.262495 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-05 03:00:01.262503 | orchestrator | Monday 05 January 2026 02:59:59 +0000 (0:00:00.864) 0:00:52.956 ******** 2026-01-05 03:00:01.262510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 03:00:01.262517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:01.262523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:01.262530 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:01.262573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 03:00:01.262580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:01.262591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:01.262598 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:01.262605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 03:00:01.262611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:01.262618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:01.262624 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:01.262631 | orchestrator | 2026-01-05 03:00:01.262638 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-05 03:00:01.262645 | orchestrator | Monday 05 January 2026 03:00:00 +0000 (0:00:01.057) 0:00:54.013 ******** 2026-01-05 03:00:01.262655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-05 03:00:02.140928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:02.141026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:02.141035 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:02.141047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 03:00:02.141062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:02.141069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:02.141075 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:02.141081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 03:00:02.141108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:02.141120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:02.141185 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:02.141193 | orchestrator | 2026-01-05 03:00:02.141200 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-05 03:00:02.141208 | orchestrator | Monday 05 January 2026 03:00:01 +0000 (0:00:00.720) 0:00:54.734 ******** 2026-01-05 03:00:02.141213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2026-01-05 03:00:02.141219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:02.141225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:02.141230 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:02.141236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-05 03:00:02.141242 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:02.141295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:10.735431 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:10.735528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-05 03:00:10.735538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-05 03:00:10.735543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-05 03:00:10.735548 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:10.735553 | orchestrator | 2026-01-05 03:00:10.735558 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-05 03:00:10.735562 | orchestrator | Monday 05 January 2026 03:00:02 +0000 (0:00:00.874) 0:00:55.609 ******** 2026-01-05 03:00:10.735567 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 03:00:10.735573 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 03:00:10.735576 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-05 03:00:10.735580 | orchestrator | 2026-01-05 03:00:10.735584 | orchestrator | TASK 
[loadbalancer : Copying over proxysql start script] *********************** 2026-01-05 03:00:10.735588 | orchestrator | Monday 05 January 2026 03:00:04 +0000 (0:00:01.885) 0:00:57.494 ******** 2026-01-05 03:00:10.735592 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 03:00:10.735596 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 03:00:10.735599 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-05 03:00:10.735620 | orchestrator | 2026-01-05 03:00:10.735624 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-05 03:00:10.735628 | orchestrator | Monday 05 January 2026 03:00:05 +0000 (0:00:01.571) 0:00:59.066 ******** 2026-01-05 03:00:10.735632 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 03:00:10.735636 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 03:00:10.735639 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-05 03:00:10.735643 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 03:00:10.735647 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:10.735651 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 03:00:10.735655 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:10.735659 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-05 03:00:10.735662 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:10.735666 | orchestrator 
| 2026-01-05 03:00:10.735670 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-05 03:00:10.735674 | orchestrator | Monday 05 January 2026 03:00:06 +0000 (0:00:01.275) 0:01:00.342 ******** 2026-01-05 03:00:10.735702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-05 03:00:10.735707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-05 03:00:10.735712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-05 03:00:10.735716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 03:00:10.735724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 03:00:10.735728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-05 03:00:10.735739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 03:00:15.839540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 03:00:15.839629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-05 03:00:15.839641 | orchestrator | 2026-01-05 03:00:15.839649 
| orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-05 03:00:15.839656 | orchestrator | Monday 05 January 2026 03:00:10 +0000 (0:00:03.855) 0:01:04.197 ******** 2026-01-05 03:00:15.839663 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:00:15.839668 | orchestrator | 2026-01-05 03:00:15.839674 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-05 03:00:15.839680 | orchestrator | Monday 05 January 2026 03:00:11 +0000 (0:00:01.093) 0:01:05.290 ******** 2026-01-05 03:00:15.839688 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 03:00:15.839721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 03:00:15.839729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:15.839751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 03:00:15.839794 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 03:00:15.839802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 03:00:15.839809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:15.839822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 
5672'], 'timeout': '30'}}})  2026-01-05 03:00:15.839829 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-05 03:00:15.839840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 03:00:15.839852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:16.556195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 03:00:16.556290 | orchestrator | 2026-01-05 03:00:16.556298 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-05 03:00:16.556304 | orchestrator | Monday 05 January 2026 03:00:15 +0000 (0:00:04.012) 0:01:09.302 ******** 2026-01-05 03:00:16.556310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 03:00:16.556332 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 03:00:16.556337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:16.556341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 03:00:16.556345 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:16.556372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 03:00:16.556377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 03:00:16.556381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:16.556389 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 03:00:16.556393 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:16.556397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-05 03:00:16.556401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-05 03:00:16.556408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:16.556417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-05 03:00:26.555314 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:26.555403 | orchestrator | 2026-01-05 03:00:26.555412 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-05 03:00:26.555420 | orchestrator | Monday 05 January 2026 03:00:16 +0000 (0:00:00.714) 0:01:10.017 ******** 2026-01-05 03:00:26.555450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 03:00:26.555461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 03:00:26.555480 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:26.555486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 03:00:26.555492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 03:00:26.555499 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:26.555504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-05 03:00:26.555510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-05 03:00:26.555516 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:26.555522 | orchestrator | 2026-01-05 03:00:26.555528 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-05 03:00:26.555535 | orchestrator | Monday 05 January 2026 03:00:17 +0000 (0:00:01.430) 0:01:11.448 ******** 2026-01-05 03:00:26.555541 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:00:26.555548 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:00:26.555554 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:00:26.555560 | orchestrator | 2026-01-05 03:00:26.555566 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-05 03:00:26.555570 | orchestrator | Monday 05 January 2026 03:00:19 
+0000 (0:00:01.775) 0:01:13.223 ******** 2026-01-05 03:00:26.555574 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:00:26.555578 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:00:26.555582 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:00:26.555585 | orchestrator | 2026-01-05 03:00:26.555589 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-05 03:00:26.555593 | orchestrator | Monday 05 January 2026 03:00:21 +0000 (0:00:02.200) 0:01:15.424 ******** 2026-01-05 03:00:26.555597 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:00:26.555601 | orchestrator | 2026-01-05 03:00:26.555605 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-05 03:00:26.555609 | orchestrator | Monday 05 January 2026 03:00:22 +0000 (0:00:00.747) 0:01:16.172 ******** 2026-01-05 03:00:26.555628 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 03:00:26.555637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:26.555667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:00:26.555676 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 03:00:26.555680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:26.555686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:00:26.555696 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-05 03:00:26.555712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:28.254337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:00:28.254480 | orchestrator | 2026-01-05 03:00:28.254571 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-05 03:00:28.254590 | orchestrator | Monday 05 January 2026 03:00:26 +0000 (0:00:03.843) 0:01:20.015 
******** 2026-01-05 03:00:28.254606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 03:00:28.254623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:28.254637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:00:28.254680 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:28.254691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 03:00:28.254758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-01-05 03:00:28.254769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:00:28.254777 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:28.254785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-05 03:00:28.254793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-05 03:00:28.254812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:00:28.254822 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:28.254835 | orchestrator | 2026-01-05 03:00:28.254854 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-05 03:00:28.254866 | orchestrator | Monday 05 January 2026 03:00:27 +0000 (0:00:00.748) 0:01:20.764 ******** 2026-01-05 03:00:28.254878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 03:00:28.254894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 03:00:28.254908 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:28.254929 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 03:00:40.164174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 03:00:40.165367 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:40.165449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 03:00:40.165468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-05 03:00:40.165483 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:40.165496 | orchestrator | 2026-01-05 03:00:40.165509 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-05 03:00:40.165524 | orchestrator | Monday 05 January 2026 03:00:28 +0000 (0:00:00.959) 0:01:21.723 ******** 2026-01-05 03:00:40.165536 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:00:40.165548 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:00:40.165560 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:00:40.165572 | orchestrator | 2026-01-05 03:00:40.165584 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-05 03:00:40.165596 | orchestrator | Monday 05 January 2026 03:00:29 +0000 (0:00:01.753) 0:01:23.476 ******** 2026-01-05 03:00:40.165606 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:00:40.165614 | orchestrator | ok: [testbed-node-1] 2026-01-05 
03:00:40.165621 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:00:40.165628 | orchestrator | 2026-01-05 03:00:40.165636 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-05 03:00:40.165644 | orchestrator | Monday 05 January 2026 03:00:32 +0000 (0:00:02.202) 0:01:25.679 ******** 2026-01-05 03:00:40.165677 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:40.165690 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:40.165703 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:40.165715 | orchestrator | 2026-01-05 03:00:40.165727 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-05 03:00:40.165770 | orchestrator | Monday 05 January 2026 03:00:32 +0000 (0:00:00.343) 0:01:26.022 ******** 2026-01-05 03:00:40.165784 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:00:40.165795 | orchestrator | 2026-01-05 03:00:40.165807 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-05 03:00:40.165819 | orchestrator | Monday 05 January 2026 03:00:33 +0000 (0:00:00.966) 0:01:26.988 ******** 2026-01-05 03:00:40.165837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 03:00:40.165867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 03:00:40.165903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-05 03:00:40.165911 | orchestrator | 2026-01-05 03:00:40.165919 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using 
single external frontend] *** 2026-01-05 03:00:40.165927 | orchestrator | Monday 05 January 2026 03:00:36 +0000 (0:00:02.771) 0:01:29.760 ******** 2026-01-05 03:00:40.165935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 03:00:40.165951 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:40.165958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 03:00:40.165966 | orchestrator | 
skipping: [testbed-node-1] 2026-01-05 03:00:40.166009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-05 03:00:40.166065 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:40.166073 | orchestrator | 2026-01-05 03:00:40.166081 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-05 03:00:40.166088 | orchestrator | Monday 05 January 2026 03:00:37 +0000 (0:00:01.644) 0:01:31.405 ******** 2026-01-05 03:00:40.166098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 03:00:40.166116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 
fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 03:00:47.227990 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:47.228093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 03:00:47.228110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 03:00:47.228135 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:47.228140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 03:00:47.228146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-05 03:00:47.228152 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:47.228158 | orchestrator | 2026-01-05 03:00:47.228170 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-05 03:00:47.228177 | orchestrator | Monday 05 January 2026 03:00:40 +0000 (0:00:02.223) 0:01:33.629 ******** 2026-01-05 03:00:47.228183 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:47.228189 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:47.228195 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:47.228201 | orchestrator | 2026-01-05 03:00:47.228207 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-05 03:00:47.228214 | orchestrator | Monday 05 January 2026 03:00:40 +0000 (0:00:00.551) 0:01:34.180 ******** 2026-01-05 03:00:47.228220 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:47.228226 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:47.228232 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:47.228238 | orchestrator | 2026-01-05 03:00:47.228244 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-05 03:00:47.228252 | orchestrator | Monday 05 January 2026 03:00:42 +0000 (0:00:01.408) 0:01:35.589 ******** 2026-01-05 03:00:47.228258 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:00:47.228262 | orchestrator | 2026-01-05 03:00:47.228266 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-05 03:00:47.228272 | orchestrator | Monday 05 January 2026 03:00:43 +0000 (0:00:01.035) 0:01:36.624 ******** 2026-01-05 03:00:47.228296 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 03:00:47.228330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.228345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.228353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.228360 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 03:00:47.228371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.228377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.228390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.973948 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-05 03:00:47.974084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.974102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.974123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.974131 | orchestrator | 2026-01-05 03:00:47.974138 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-05 03:00:47.974145 | orchestrator | Monday 05 January 2026 03:00:47 +0000 (0:00:04.189) 0:01:40.813 ******** 2026-01-05 03:00:47.974152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 03:00:47.974194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.974201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.974207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.974213 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:47.974224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 03:00:47.974230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.974241 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 03:00:47.974252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 03:00:58.944617 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:58.944734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-05 03:00:58.944748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:00:58.944771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-05 03:00:58.944780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-05 03:00:58.944804 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:58.944812 | orchestrator | 2026-01-05 03:00:58.944819 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-05 03:00:58.944826 | orchestrator | Monday 05 January 2026 03:00:48 +0000 (0:00:00.727) 0:01:41.540 ******** 2026-01-05 03:00:58.944835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 03:00:58.944844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 03:00:58.944852 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:58.944859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 03:00:58.944865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 
03:00:58.944871 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:58.944891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 03:00:58.944898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-05 03:00:58.944904 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:58.944911 | orchestrator | 2026-01-05 03:00:58.944917 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-05 03:00:58.944923 | orchestrator | Monday 05 January 2026 03:00:49 +0000 (0:00:01.528) 0:01:43.068 ******** 2026-01-05 03:00:58.944930 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:00:58.944937 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:00:58.944943 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:00:58.944950 | orchestrator | 2026-01-05 03:00:58.944956 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-05 03:00:58.944962 | orchestrator | Monday 05 January 2026 03:00:51 +0000 (0:00:01.460) 0:01:44.529 ******** 2026-01-05 03:00:58.944969 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:00:58.944975 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:00:58.944981 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:00:58.944987 | orchestrator | 2026-01-05 03:00:58.944993 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-05 03:00:58.944999 | orchestrator | Monday 05 January 2026 03:00:53 +0000 (0:00:02.174) 0:01:46.704 ******** 2026-01-05 03:00:58.945006 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:58.945012 | orchestrator | 
skipping: [testbed-node-1] 2026-01-05 03:00:58.945018 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:58.945024 | orchestrator | 2026-01-05 03:00:58.945030 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-05 03:00:58.945037 | orchestrator | Monday 05 January 2026 03:00:53 +0000 (0:00:00.335) 0:01:47.039 ******** 2026-01-05 03:00:58.945043 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:00:58.945055 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:00:58.945061 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:00:58.945067 | orchestrator | 2026-01-05 03:00:58.945073 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-05 03:00:58.945080 | orchestrator | Monday 05 January 2026 03:00:54 +0000 (0:00:00.578) 0:01:47.618 ******** 2026-01-05 03:00:58.945086 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:00:58.945092 | orchestrator | 2026-01-05 03:00:58.945099 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-05 03:00:58.945105 | orchestrator | Monday 05 January 2026 03:00:54 +0000 (0:00:00.852) 0:01:48.471 ******** 2026-01-05 03:00:58.945112 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 03:00:58.945121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 03:00:58.945135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 03:00:58.945148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 03:00:59.523444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 03:00:59.523636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:00:59.523671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  
2026-01-05 03:00:59.523688 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 03:00:59.523704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 03:00:59.523719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 03:00:59.523752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 03:00:59.523776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 03:00:59.523797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:00:59.523813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 03:00:59.523827 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-05 03:00:59.523841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 03:00:59.523866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 03:01:00.221589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 03:01:00.221698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 03:01:00.221712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:01:00.221719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 03:01:00.221724 | orchestrator | 2026-01-05 03:01:00.221731 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-05 03:01:00.221736 | orchestrator | Monday 05 January 2026 03:00:59 +0000 (0:00:04.509) 0:01:52.980 ******** 2026-01-05 03:01:00.221742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 03:01:00.221750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 03:01:00.221792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-01-05 03:01:00.221806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 03:01:00.221818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 03:01:00.221825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:01:00.221832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 03:01:00.221840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 03:01:00.221856 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:01:00.221871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 03:01:01.501526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 03:01:01.501720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 03:01:01.501732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 03:01:01.501741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:01:01.501749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-05 03:01:01.501776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-05 03:01:01.501784 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:01:01.501809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-05 03:01:01.501821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-05 03:01:01.501828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-05 03:01:01.501835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-05 03:01:01.501841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:01:01.501848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-05 03:01:01.501865 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:01:01.501882 | orchestrator |
2026-01-05 03:01:01.501895 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-01-05 03:01:01.501907 | orchestrator | Monday 05 January 2026 03:01:00 +0000 (0:00:00.923) 0:01:53.904 ********
2026-01-05 03:01:01.501919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-05 03:01:01.501932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-05 03:01:01.501944 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:01:01.501963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-05 03:01:11.207386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-05 03:01:11.207479 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:01:11.207487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-05 03:01:11.207495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-05 03:01:11.207557 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:01:11.207563 | orchestrator |
2026-01-05 03:01:11.207569 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-05 03:01:11.207574 | orchestrator | Monday 05 January 2026 03:01:01 +0000 (0:00:01.066) 0:01:54.971 ********
2026-01-05 03:01:11.207579 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:01:11.207585 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:01:11.207589 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:01:11.207594 | orchestrator |
2026-01-05 03:01:11.207599 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-05 03:01:11.207604 | orchestrator | Monday 05 January 2026 03:01:03 +0000 (0:00:01.691) 0:01:56.662 ********
2026-01-05 03:01:11.207608 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:01:11.207613 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:01:11.207618 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:01:11.207622 | orchestrator |
2026-01-05 03:01:11.207627 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-01-05 03:01:11.207632 | orchestrator | Monday 05 January 2026 03:01:05 +0000 (0:00:02.209) 0:01:58.871 ********
2026-01-05 03:01:11.207636 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:01:11.207641 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:01:11.207645 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:01:11.207650 | orchestrator |
2026-01-05 03:01:11.207655 | orchestrator | TASK [include_role : glance] ***************************************************
2026-01-05 03:01:11.207659 | orchestrator | Monday 05 January 2026 03:01:05 +0000 (0:00:00.366) 0:01:59.238 ********
2026-01-05 03:01:11.207664 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 03:01:11.207685 | orchestrator |
2026-01-05 03:01:11.207690 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-01-05 03:01:11.207694 | orchestrator | Monday 05 January 2026 03:01:06 +0000 (0:00:01.100) 0:02:00.339 ********
2026-01-05 03:01:11.207703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-05 03:01:11.207728 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 03:01:11.207735 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 03:01:11.207751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 03:01:14.760370 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-05 03:01:14.760658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-05 03:01:14.760687 | orchestrator |
2026-01-05 03:01:14.760701 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-01-05 03:01:14.760714 | orchestrator | Monday 05 January 2026 03:01:11 +0000 (0:00:04.465) 0:02:04.805 ********
2026-01-05 03:01:14.760750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 03:01:14.760792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 03:01:14.760806 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:01:14.760847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 03:01:18.638390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 03:01:18.638591 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:01:18.638631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-05 03:01:18.638690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-05 03:01:18.638703 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:01:18.638714 | orchestrator | 2026-01-05 03:01:18.638725 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-05 03:01:18.638736 | orchestrator | Monday 05 January 2026 03:01:14 +0000 (0:00:03.527) 0:02:08.332 ******** 2026-01-05 03:01:18.638747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 03:01:18.638760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}})  2026-01-05 03:01:18.638770 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:01:18.638785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 03:01:18.638796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 03:01:18.638814 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:01:18.638832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-05 03:01:32.666970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-05 03:01:32.667052 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:01:32.667059 | orchestrator |
2026-01-05 03:01:32.667064 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-01-05 03:01:32.667070 | orchestrator | Monday 05 January 2026 03:01:18 +0000 (0:00:03.771) 0:02:12.104 ********
2026-01-05 03:01:32.667074 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:01:32.667078 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:01:32.667082 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:01:32.667086 | orchestrator |
2026-01-05 03:01:32.667091 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-01-05 03:01:32.667095 | orchestrator | Monday 05 January 2026 03:01:20 +0000 (0:00:01.380) 0:02:13.485 ********
2026-01-05 03:01:32.667099 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:01:32.667103 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:01:32.667106 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:01:32.667111 | orchestrator |
2026-01-05 03:01:32.667117 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-01-05 03:01:32.667123 | orchestrator | Monday 05 January 2026 03:01:22 +0000 (0:00:02.240) 0:02:15.726 ********
2026-01-05 03:01:32.667128 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:01:32.667133 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:01:32.667143 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:01:32.667150 | orchestrator |
2026-01-05 03:01:32.667157 |
orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-05 03:01:32.667162 | orchestrator | Monday 05 January 2026 03:01:22 +0000 (0:00:00.344) 0:02:16.071 ******** 2026-01-05 03:01:32.667168 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:01:32.667174 | orchestrator | 2026-01-05 03:01:32.667180 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-05 03:01:32.667186 | orchestrator | Monday 05 January 2026 03:01:23 +0000 (0:00:01.224) 0:02:17.295 ******** 2026-01-05 03:01:32.667193 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 03:01:32.667227 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}}) 2026-01-05 03:01:32.667234 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-05 03:01:32.667240 | orchestrator | 2026-01-05 03:01:32.667247 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-05 03:01:32.667254 | orchestrator | Monday 05 January 2026 03:01:27 +0000 (0:00:03.684) 0:02:20.980 ******** 2026-01-05 03:01:32.667274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 03:01:32.667279 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:01:32.667283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 03:01:32.667288 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:01:32.667291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-05 03:01:32.667295 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:01:32.667303 | orchestrator | 2026-01-05 03:01:32.667307 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-05 03:01:32.667311 | orchestrator | Monday 05 January 2026 03:01:27 +0000 (0:00:00.462) 0:02:21.442 ******** 2026-01-05 03:01:32.667316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-05 03:01:32.667322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-05 03:01:32.667327 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:01:32.667331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-05 03:01:32.667337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-05 03:01:32.667341 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:01:32.667345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-05 03:01:32.667349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-05 03:01:32.667352 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:01:32.667356 | orchestrator |
2026-01-05 03:01:32.667360 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-01-05 03:01:32.667364 | orchestrator | Monday 05 January 2026 03:01:28 +0000 (0:00:01.001) 0:02:22.444 ********
2026-01-05 03:01:32.667368 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:01:32.667371 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:01:32.667375 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:01:32.667379 | orchestrator |
2026-01-05 03:01:32.667383 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-01-05 03:01:32.667387 | orchestrator | Monday 05 January 2026
03:01:30 +0000 (0:00:01.422) 0:02:23.866 ********
2026-01-05 03:01:32.667390 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:01:32.667395 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:01:32.667398 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:01:32.667402 | orchestrator |
2026-01-05 03:01:32.667406 | orchestrator | TASK [include_role : heat] *****************************************************
2026-01-05 03:01:32.667436 | orchestrator | Monday 05 January 2026 03:01:32 +0000 (0:00:02.262) 0:02:26.128 ********
2026-01-05 03:01:39.091505 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:01:39.091595 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:01:39.091605 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:01:39.091611 | orchestrator |
2026-01-05 03:01:39.091619 | orchestrator | TASK [include_role : horizon] **************************************************
2026-01-05 03:01:39.091627 | orchestrator | Monday 05 January 2026 03:01:33 +0000 (0:00:00.633) 0:02:26.762 ********
2026-01-05 03:01:39.091633 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 03:01:39.091639 | orchestrator |
2026-01-05 03:01:39.091645 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-01-05 03:01:39.091652 | orchestrator | Monday 05 January 2026 03:01:34 +0000 (0:00:00.957) 0:02:27.720 ********
2026-01-05 03:01:39.091664 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no',
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 03:01:39.091716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-05 03:01:39.091722 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-01-05 03:01:39.091731 | orchestrator | 2026-01-05 03:01:39.091741 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-05 03:01:39.091748 | orchestrator | Monday 05 January 2026 03:01:38 +0000 (0:00:04.121) 0:02:31.841 ******** 2026-01-05 03:01:39.091760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 03:01:40.159013 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:01:40.159134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 03:01:40.159149 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:01:40.159168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-05 03:01:40.159229 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:01:40.159235 | orchestrator | 2026-01-05 03:01:40.159241 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-05 03:01:40.159246 | orchestrator | Monday 05 January 2026 03:01:39 +0000 (0:00:00.721) 0:02:32.562 ******** 2026-01-05 03:01:40.159252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 03:01:40.159259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 03:01:40.159266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 03:01:40.159275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 03:01:40.159281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 03:01:40.159287 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:01:40.159292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 03:01:40.159297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 03:01:40.159301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 03:01:40.159306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 03:01:40.159321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 03:01:40.159325 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:01:40.159329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 03:01:40.159338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 03:01:50.121811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-05 03:01:50.121904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-05 03:01:50.121915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-05 03:01:50.121922 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:01:50.121929 | orchestrator | 2026-01-05 03:01:50.121936 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-05 03:01:50.121942 | orchestrator | Monday 05 January 2026 03:01:40 +0000 (0:00:01.060) 0:02:33.622 ******** 2026-01-05 03:01:50.121948 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:01:50.121954 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:01:50.121960 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:01:50.121965 | orchestrator | 2026-01-05 03:01:50.121971 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-05 03:01:50.121977 | orchestrator | Monday 05 January 2026 03:01:41 +0000 (0:00:01.826) 0:02:35.449 ******** 2026-01-05 03:01:50.121982 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:01:50.121987 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:01:50.121993 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:01:50.121998 | orchestrator | 2026-01-05 03:01:50.122004 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-05 03:01:50.122010 | orchestrator | Monday 05 January 2026 03:01:44 +0000 (0:00:02.331) 0:02:37.781 ******** 2026-01-05 03:01:50.122050 | orchestrator | skipping: [testbed-node-0] 
2026-01-05 03:01:50.122069 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:01:50.122075 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:01:50.122080 | orchestrator | 2026-01-05 03:01:50.122085 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-05 03:01:50.122091 | orchestrator | Monday 05 January 2026 03:01:44 +0000 (0:00:00.348) 0:02:38.129 ******** 2026-01-05 03:01:50.122097 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:01:50.122102 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:01:50.122108 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:01:50.122113 | orchestrator | 2026-01-05 03:01:50.122118 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-05 03:01:50.122140 | orchestrator | Monday 05 January 2026 03:01:45 +0000 (0:00:00.354) 0:02:38.483 ******** 2026-01-05 03:01:50.122146 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:01:50.122151 | orchestrator | 2026-01-05 03:01:50.122157 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-05 03:01:50.122162 | orchestrator | Monday 05 January 2026 03:01:46 +0000 (0:00:01.325) 0:02:39.809 ******** 2026-01-05 03:01:50.122172 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:01:50.122180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:01:50.122202 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:01:50.122209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:01:50.122219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:01:50.122230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2026-01-05 03:01:50.122237 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:01:50.122248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:01:52.051218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:01:52.051418 | orchestrator | 2026-01-05 03:01:52.051439 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-05 03:01:52.051447 | orchestrator | Monday 05 January 2026 03:01:50 +0000 (0:00:03.775) 0:02:43.585 ******** 2026-01-05 03:01:52.051460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:01:52.051498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:01:52.051527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:01:52.051536 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:01:52.051545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:01:52.051574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:01:52.051582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:01:52.051595 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:01:52.051605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:01:52.051613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:01:52.051620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:01:52.051626 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:01:52.051634 | orchestrator | 2026-01-05 
03:01:52.051641 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-05 03:01:52.051648 | orchestrator | Monday 05 January 2026 03:01:50 +0000 (0:00:00.666) 0:02:44.251 ******** 2026-01-05 03:01:52.051658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 03:01:52.051674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 03:02:02.674860 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:02.674992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 03:02:02.675007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 03:02:02.675036 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:02.675043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 03:02:02.675050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-05 03:02:02.675057 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:02.675063 | orchestrator | 2026-01-05 03:02:02.675082 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-05 03:02:02.675090 | orchestrator | Monday 05 January 2026 03:01:52 +0000 (0:00:01.257) 0:02:45.509 ******** 2026-01-05 03:02:02.675096 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:02:02.675103 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:02:02.675109 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:02:02.675116 | orchestrator | 2026-01-05 03:02:02.675122 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-05 03:02:02.675129 | orchestrator | Monday 05 January 2026 03:01:53 +0000 (0:00:01.409) 0:02:46.918 ******** 2026-01-05 03:02:02.675135 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:02:02.675141 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:02:02.675147 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:02:02.675153 | orchestrator | 2026-01-05 03:02:02.675159 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-05 03:02:02.675166 | orchestrator | Monday 05 January 2026 03:01:55 +0000 (0:00:02.300) 0:02:49.219 ******** 2026-01-05 03:02:02.675172 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:02.675178 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:02.675184 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:02.675191 | orchestrator | 2026-01-05 03:02:02.675197 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-05 03:02:02.675203 | orchestrator | Monday 05 January 2026 03:01:56 +0000 (0:00:00.638) 0:02:49.857 ******** 
2026-01-05 03:02:02.675210 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:02:02.675216 | orchestrator | 2026-01-05 03:02:02.675223 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-05 03:02:02.675229 | orchestrator | Monday 05 January 2026 03:01:57 +0000 (0:00:01.131) 0:02:50.989 ******** 2026-01-05 03:02:02.675237 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 03:02:02.675249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:02:02.675315 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 03:02:02.675334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:02:02.675344 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-05 03:02:02.675353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:02:02.675362 | orchestrator | 2026-01-05 03:02:02.675372 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-05 03:02:02.675382 | orchestrator | Monday 05 January 2026 03:02:01 +0000 (0:00:04.034) 0:02:55.023 ******** 2026-01-05 03:02:02.675398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 03:02:13.352885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:02:13.352990 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:13.353017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 03:02:13.353026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:02:13.353033 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:13.353039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-05 03:02:13.353066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:02:13.353087 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:13.353093 | orchestrator | 2026-01-05 03:02:13.353100 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-05 03:02:13.353107 | orchestrator | Monday 05 January 2026 03:02:02 +0000 (0:00:01.120) 0:02:56.144 ******** 2026-01-05 03:02:13.353115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-05 03:02:13.353124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-05 03:02:13.353132 | orchestrator | skipping: [testbed-node-0] 2026-01-05 
03:02:13.353138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-05 03:02:13.353144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-05 03:02:13.353151 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:13.353161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-05 03:02:13.353168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-05 03:02:13.353175 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:13.353181 | orchestrator | 2026-01-05 03:02:13.353187 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-05 03:02:13.353193 | orchestrator | Monday 05 January 2026 03:02:03 +0000 (0:00:00.998) 0:02:57.143 ******** 2026-01-05 03:02:13.353200 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:02:13.353207 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:02:13.353213 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:02:13.353218 | orchestrator | 2026-01-05 03:02:13.353224 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-05 03:02:13.353230 | orchestrator | Monday 05 January 2026 03:02:05 +0000 (0:00:02.035) 0:02:59.178 ******** 2026-01-05 03:02:13.353236 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:02:13.353336 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:02:13.353343 | orchestrator | 
ok: [testbed-node-2] 2026-01-05 03:02:13.353350 | orchestrator | 2026-01-05 03:02:13.353356 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-05 03:02:13.353363 | orchestrator | Monday 05 January 2026 03:02:07 +0000 (0:00:02.249) 0:03:01.428 ******** 2026-01-05 03:02:13.353370 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:02:13.353385 | orchestrator | 2026-01-05 03:02:13.353392 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-05 03:02:13.353398 | orchestrator | Monday 05 January 2026 03:02:09 +0000 (0:00:01.164) 0:03:02.593 ******** 2026-01-05 03:02:13.353406 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 03:02:13.353413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:02:13.353430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 03:02:14.114903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 03:02:14.115015 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 03:02:14.115026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:02:14.115053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 03:02:14.115060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 03:02:14.115083 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-05 03:02:14.115090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:02:14.115100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 
'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 03:02:14.115107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 03:02:14.115121 | orchestrator | 2026-01-05 03:02:14.115129 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-05 03:02:14.115137 | orchestrator | Monday 05 January 2026 03:02:13 +0000 (0:00:04.225) 0:03:06.819 ******** 2026-01-05 03:02:14.115144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 03:02:14.115152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:02:14.115162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 03:02:19.096182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 03:02:19.096296 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:19.096318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 03:02:19.096342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:02:19.096347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 03:02:19.096352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 03:02:19.096356 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:19.096372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-05 03:02:19.096380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:02:19.096384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-05 03:02:19.096392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-05 03:02:19.096396 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:19.096400 | orchestrator | 2026-01-05 03:02:19.096405 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-05 03:02:19.096410 | orchestrator | Monday 05 January 2026 03:02:14 +0000 (0:00:00.770) 0:03:07.590 ******** 2026-01-05 03:02:19.096415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-05 03:02:19.096422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-05 03:02:19.096435 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:19.096439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-05 03:02:19.096443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-05 03:02:19.096447 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:19.096451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-05 03:02:19.096455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  
2026-01-05 03:02:19.096458 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:19.096462 | orchestrator | 2026-01-05 03:02:19.096466 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-05 03:02:19.096470 | orchestrator | Monday 05 January 2026 03:02:15 +0000 (0:00:00.994) 0:03:08.584 ******** 2026-01-05 03:02:19.096474 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:02:19.096479 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:02:19.096482 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:02:19.096486 | orchestrator | 2026-01-05 03:02:19.096490 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-05 03:02:19.096494 | orchestrator | Monday 05 January 2026 03:02:16 +0000 (0:00:01.752) 0:03:10.337 ******** 2026-01-05 03:02:19.096497 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:02:19.096501 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:02:19.096505 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:02:19.096509 | orchestrator | 2026-01-05 03:02:19.096513 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-05 03:02:19.096519 | orchestrator | Monday 05 January 2026 03:02:19 +0000 (0:00:02.228) 0:03:12.566 ******** 2026-01-05 03:02:29.837113 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:02:29.837233 | orchestrator | 2026-01-05 03:02:29.837243 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-05 03:02:29.837249 | orchestrator | Monday 05 January 2026 03:02:20 +0000 (0:00:01.529) 0:03:14.095 ******** 2026-01-05 03:02:29.837255 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-01-05 03:02:29.837261 | orchestrator | 2026-01-05 03:02:29.837268 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-05 
03:02:29.837274 | orchestrator | Monday 05 January 2026 03:02:24 +0000 (0:00:03.701) 0:03:17.797 ******** 2026-01-05 03:02:29.837290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:02:29.837300 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 03:02:29.837307 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:29.837329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:02:29.837340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 03:02:29.837346 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:29.837352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:02:29.837358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 03:02:29.837367 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:29.837373 | orchestrator | 2026-01-05 03:02:29.837378 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 
2026-01-05 03:02:29.837385 | orchestrator | Monday 05 January 2026 03:02:27 +0000 (0:00:02.766) 0:03:20.563 ******** 2026-01-05 03:02:29.837398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:02:32.783963 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 03:02:32.784059 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:32.784073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:02:32.784105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 03:02:32.784112 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:32.784786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:02:32.784837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-05 03:02:32.784846 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:32.784853 | orchestrator | 2026-01-05 03:02:32.784861 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-05 
03:02:32.784868 | orchestrator | Monday 05 January 2026 03:02:29 +0000 (0:00:02.742) 0:03:23.306 ******** 2026-01-05 03:02:32.784888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 03:02:32.784896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 03:02:32.784902 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:32.784908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 03:02:32.784914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 03:02:32.784921 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:32.784939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-05 03:02:42.282236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}})  2026-01-05 03:02:42.282334 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:42.282344 | orchestrator | 2026-01-05 03:02:42.282352 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-05 03:02:42.282382 | orchestrator | Monday 05 January 2026 03:02:32 +0000 (0:00:02.944) 0:03:26.250 ******** 2026-01-05 03:02:42.282389 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:02:42.282395 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:02:42.282401 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:02:42.282406 | orchestrator | 2026-01-05 03:02:42.282412 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-05 03:02:42.282418 | orchestrator | Monday 05 January 2026 03:02:35 +0000 (0:00:02.323) 0:03:28.573 ******** 2026-01-05 03:02:42.282423 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:42.282429 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:42.282434 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:42.282440 | orchestrator | 2026-01-05 03:02:42.282445 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-05 03:02:42.282451 | orchestrator | Monday 05 January 2026 03:02:36 +0000 (0:00:01.704) 0:03:30.278 ******** 2026-01-05 03:02:42.282456 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:42.282462 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:42.282468 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:42.282473 | orchestrator | 2026-01-05 03:02:42.282479 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-05 03:02:42.282484 | orchestrator | Monday 05 January 2026 03:02:37 +0000 (0:00:00.375) 0:03:30.654 ******** 2026-01-05 03:02:42.282489 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:02:42.282495 | 
orchestrator | 2026-01-05 03:02:42.282500 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-05 03:02:42.282506 | orchestrator | Monday 05 January 2026 03:02:38 +0000 (0:00:01.508) 0:03:32.162 ******** 2026-01-05 03:02:42.282513 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 03:02:42.282523 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 03:02:42.282529 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-05 03:02:42.282540 | orchestrator | 2026-01-05 03:02:42.282559 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-05 03:02:42.282566 | orchestrator | Monday 05 January 2026 03:02:40 +0000 (0:00:01.629) 0:03:33.792 ******** 2026-01-05 03:02:42.282578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 03:02:42.282584 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:42.282590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 03:02:42.282595 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:42.282601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-05 03:02:42.282607 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:42.282612 | orchestrator | 2026-01-05 03:02:42.282618 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-05 03:02:42.282623 | orchestrator | Monday 05 January 2026 03:02:40 +0000 (0:00:00.458) 0:03:34.251 ******** 2026-01-05 03:02:42.282630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 03:02:42.282636 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:42.282642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 03:02:42.282648 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:42.282653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-05 03:02:42.282663 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:42.282669 | orchestrator | 2026-01-05 03:02:42.282674 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-05 03:02:42.282679 | orchestrator | Monday 05 January 2026 03:02:41 +0000 (0:00:00.950) 0:03:35.201 ******** 2026-01-05 03:02:42.282685 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:42.282691 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:42.282696 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:42.282702 | orchestrator | 2026-01-05 03:02:42.282707 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-05 03:02:42.282716 | orchestrator | Monday 05 January 2026 03:02:42 +0000 (0:00:00.549) 0:03:35.751 ******** 2026-01-05 03:02:50.240858 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:50.240984 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:50.240995 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:50.241000 | orchestrator | 2026-01-05 03:02:50.241006 | 
orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-05 03:02:50.241024 | orchestrator | Monday 05 January 2026 03:02:43 +0000 (0:00:01.482) 0:03:37.233 ******** 2026-01-05 03:02:50.241057 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:50.241062 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:50.241067 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:50.241071 | orchestrator | 2026-01-05 03:02:50.241076 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-05 03:02:50.241081 | orchestrator | Monday 05 January 2026 03:02:44 +0000 (0:00:00.378) 0:03:37.611 ******** 2026-01-05 03:02:50.241085 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:02:50.241110 | orchestrator | 2026-01-05 03:02:50.241116 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-05 03:02:50.241121 | orchestrator | Monday 05 January 2026 03:02:45 +0000 (0:00:01.660) 0:03:39.272 ******** 2026-01-05 03:02:50.241129 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}}) 2026-01-05 03:02:50.241138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.241145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.241168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.241191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 03:02:50.241197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.241203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:50.241211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:50.241217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.241228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:50.241236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.428853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 03:02:50.428931 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 03:02:50.428939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:50.428945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.428971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.428993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 03:02:50.428999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.429004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:50.429008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.429016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 03:02:50.429020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.429028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:50.739638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:50.739743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.739759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:50.739793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 
'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.739806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 03:02:50.739817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:50.739844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.739853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 03:02:50.739861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:50.739875 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-05 03:02:50.739888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:50.739909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:51.014977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:51.015055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 03:02:51.015080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:51.015120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:51.015131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:51.015144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:51.015191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:51.015199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2026-01-05 03:02:51.015208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 03:02:51.015222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:51.015230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:51.015238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 03:02:51.015256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:52.568788 | orchestrator | 2026-01-05 03:02:52.568900 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-05 03:02:52.568916 | orchestrator | Monday 05 January 2026 03:02:50 +0000 (0:00:05.207) 0:03:44.479 ******** 2026-01-05 03:02:52.568963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 03:02:52.569005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.569019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.569030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.569057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 03:02:52.569108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.569128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:52.569142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:52.569154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.569165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:52.569176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.569202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 03:02:52.718569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:52.718683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.718701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 03:02:52.718717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 03:02:52.718729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:52.718761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.718781 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:02:52.718796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.718809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.718914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 03:02:52.718939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.718952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:52.718980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:52.808301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-05 03:02:52.808409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.808427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.808443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:52.808475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.808532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.808549 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.808565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 03:02:52.808580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-05 03:02:52.808596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:52.808610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:52.808638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:54.658814 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:54.658954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 03:02:54.658984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-05 03:02:54.659005 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:54.659026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:54.659046 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:02:54.659136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:54.659194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:54.659208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-05 03:02:54.659218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-01-05 03:02:54.659229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-05 03:02:54.659240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-05 03:02:54.659263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-05 03:02:54.659274 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:02:54.659284 | orchestrator | 2026-01-05 03:02:54.659295 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-05 03:02:54.659306 | orchestrator | Monday 05 January 2026 03:02:53 +0000 (0:00:02.027) 0:03:46.507 ******** 2026-01-05 03:02:54.659322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-05 03:03:06.260155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-05 03:03:06.260298 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:06.260314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-05 03:03:06.260327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-05 03:03:06.260336 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:06.260346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-05 03:03:06.260355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-05 03:03:06.260364 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:06.260377 | orchestrator | 2026-01-05 03:03:06.260399 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-05 03:03:06.260422 | orchestrator | Monday 05 January 2026 03:02:54 +0000 (0:00:01.622) 0:03:48.130 ******** 2026-01-05 03:03:06.260435 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:06.260450 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:06.260464 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:06.260477 | orchestrator | 2026-01-05 03:03:06.260490 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-05 03:03:06.260503 | orchestrator | Monday 05 January 2026 03:02:56 +0000 (0:00:02.170) 0:03:50.300 ******** 2026-01-05 03:03:06.260517 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:06.260531 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:06.260545 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:06.260559 | orchestrator | 2026-01-05 03:03:06.260572 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-05 03:03:06.260588 | orchestrator | Monday 05 January 2026 03:02:59 +0000 (0:00:02.385) 0:03:52.686 ******** 2026-01-05 03:03:06.260603 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:03:06.260620 | orchestrator | 2026-01-05 03:03:06.260635 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-05 03:03:06.260651 | orchestrator | Monday 05 
January 2026 03:03:00 +0000 (0:00:01.326) 0:03:54.013 ******** 2026-01-05 03:03:06.260695 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 03:03:06.260739 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 03:03:06.260791 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-05 03:03:06.260810 | orchestrator | 2026-01-05 03:03:06.260825 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-05 03:03:06.260842 | orchestrator | Monday 05 January 2026 03:03:04 +0000 (0:00:04.307) 0:03:58.321 ******** 2026-01-05 03:03:06.260860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 03:03:06.260877 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:06.260906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 03:03:06.260922 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:06.260946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-05 03:03:06.260962 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:06.260977 | orchestrator | 2026-01-05 03:03:06.260992 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-05 03:03:06.261007 | orchestrator | Monday 05 January 2026 03:03:05 +0000 (0:00:00.566) 0:03:58.887 ******** 2026-01-05 03:03:06.261021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 03:03:06.261073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 03:03:16.847383 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:16.847476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 03:03:16.847483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 03:03:16.847490 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:16.847496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 03:03:16.847503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-05 03:03:16.847509 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:16.847515 | orchestrator | 2026-01-05 03:03:16.847524 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-05 03:03:16.847551 | orchestrator | Monday 05 January 2026 03:03:06 +0000 (0:00:00.841) 0:03:59.729 ******** 2026-01-05 03:03:16.847558 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:16.847565 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:16.847570 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:16.847576 | orchestrator | 2026-01-05 03:03:16.847582 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-05 03:03:16.847588 | orchestrator | Monday 05 January 2026 03:03:08 +0000 (0:00:01.763) 0:04:01.492 ******** 2026-01-05 03:03:16.847593 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:16.847600 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:16.847606 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:16.847612 | orchestrator | 2026-01-05 03:03:16.847618 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-05 03:03:16.847624 | orchestrator | Monday 05 January 2026 03:03:10 +0000 (0:00:02.314) 0:04:03.806 ******** 2026-01-05 03:03:16.847629 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:03:16.847635 | orchestrator | 2026-01-05 03:03:16.847641 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-05 03:03:16.847648 | orchestrator | Monday 05 January 2026 03:03:11 +0000 (0:00:01.401) 0:04:05.208 ******** 2026-01-05 03:03:16.847660 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 03:03:16.847681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:03:16.847700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:03:16.847705 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 03:03:16.847714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:03:16.847719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:03:16.847726 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-05 03:03:16.847735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:03:17.585402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:03:17.585506 | orchestrator | 2026-01-05 03:03:17.585524 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-05 03:03:17.585537 | orchestrator | Monday 05 January 2026 03:03:16 +0000 (0:00:05.097) 0:04:10.305 ******** 2026-01-05 03:03:17.585555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 03:03:17.585569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:03:17.585601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:03:17.585610 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:17.585635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 03:03:17.585663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:03:17.585670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:03:17.585677 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:17.585687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-05 03:03:17.585695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-05 03:03:17.585702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-05 03:03:17.585714 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:17.585720 | orchestrator | 2026-01-05 03:03:17.585731 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-05 03:03:34.628478 | orchestrator | Monday 05 January 2026 03:03:17 +0000 (0:00:00.735) 0:04:11.041 ******** 2026-01-05 03:03:34.628630 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628726 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:34.628748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628829 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:34.628845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-05 03:03:34.628910 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:34.628922 | orchestrator | 2026-01-05 03:03:34.629010 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-05 03:03:34.629024 | orchestrator | Monday 05 January 2026 03:03:18 +0000 (0:00:01.015) 0:04:12.056 ******** 2026-01-05 03:03:34.629066 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:34.629080 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:34.629093 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:34.629106 | orchestrator | 2026-01-05 03:03:34.629119 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-05 03:03:34.629131 | orchestrator | Monday 05 January 2026 03:03:20 +0000 (0:00:01.975) 0:04:14.031 ******** 2026-01-05 03:03:34.629144 | 
orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:34.629156 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:34.629170 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:34.629208 | orchestrator | 2026-01-05 03:03:34.629222 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-05 03:03:34.629235 | orchestrator | Monday 05 January 2026 03:03:22 +0000 (0:00:02.367) 0:04:16.399 ******** 2026-01-05 03:03:34.629273 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:03:34.629287 | orchestrator | 2026-01-05 03:03:34.629299 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-05 03:03:34.629312 | orchestrator | Monday 05 January 2026 03:03:24 +0000 (0:00:01.907) 0:04:18.306 ******** 2026-01-05 03:03:34.629326 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-05 03:03:34.629340 | orchestrator | 2026-01-05 03:03:34.629354 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-05 03:03:34.629367 | orchestrator | Monday 05 January 2026 03:03:26 +0000 (0:00:01.314) 0:04:19.621 ******** 2026-01-05 03:03:34.629422 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 03:03:34.629437 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 03:03:34.629450 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-05 03:03:34.629461 | orchestrator | 2026-01-05 03:03:34.629473 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-05 03:03:34.629485 | orchestrator | Monday 05 January 2026 03:03:30 +0000 (0:00:04.774) 0:04:24.396 ******** 2026-01-05 03:03:34.629498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 03:03:34.629519 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:34.629531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 03:03:34.629549 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:34.629561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 03:03:34.629572 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:34.629583 | orchestrator | 2026-01-05 03:03:34.629594 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-05 03:03:34.629605 | orchestrator | Monday 05 January 2026 03:03:32 +0000 (0:00:02.058) 0:04:26.454 ******** 2026-01-05 03:03:34.629617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 03:03:34.629629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 03:03:34.629642 | orchestrator 
| skipping: [testbed-node-0] 2026-01-05 03:03:34.629660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 03:03:57.119534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 03:03:57.119619 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:57.119627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 03:03:57.119635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-05 03:03:57.119640 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:57.119644 | orchestrator | 2026-01-05 03:03:57.119649 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 03:03:57.119654 | orchestrator | Monday 05 January 2026 03:03:34 +0000 (0:00:01.642) 0:04:28.097 ******** 2026-01-05 03:03:57.119658 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:57.119662 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:57.119666 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:57.119670 | orchestrator | 2026-01-05 03:03:57.119674 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 03:03:57.119678 | orchestrator | Monday 05 
January 2026 03:03:38 +0000 (0:00:03.730) 0:04:31.827 ******** 2026-01-05 03:03:57.119682 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:57.119701 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:57.119705 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:57.119708 | orchestrator | 2026-01-05 03:03:57.119712 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-05 03:03:57.119716 | orchestrator | Monday 05 January 2026 03:03:41 +0000 (0:00:03.367) 0:04:35.195 ******** 2026-01-05 03:03:57.119721 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-05 03:03:57.119726 | orchestrator | 2026-01-05 03:03:57.119730 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-05 03:03:57.119735 | orchestrator | Monday 05 January 2026 03:03:43 +0000 (0:00:01.656) 0:04:36.852 ******** 2026-01-05 03:03:57.119741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 03:03:57.119750 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:57.119771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 03:03:57.119781 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:57.119788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 03:03:57.119795 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:57.119801 | orchestrator | 2026-01-05 03:03:57.119807 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-05 03:03:57.119813 | orchestrator | Monday 05 January 2026 03:03:44 +0000 (0:00:01.397) 0:04:38.249 ******** 2026-01-05 03:03:57.119836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 03:03:57.119842 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:57.119848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 03:03:57.119942 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:57.119947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-05 03:03:57.119953 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:57.119958 | orchestrator | 2026-01-05 03:03:57.119964 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-05 03:03:57.119969 | orchestrator | Monday 05 January 2026 03:03:46 +0000 (0:00:01.515) 0:04:39.764 ******** 2026-01-05 03:03:57.119974 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:57.119980 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:57.119985 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:03:57.119991 | orchestrator | 2026-01-05 03:03:57.119997 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 03:03:57.120002 | orchestrator | Monday 05 January 2026 03:03:48 +0000 (0:00:02.170) 0:04:41.935 ******** 2026-01-05 03:03:57.120008 
| orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:57.120013 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:57.120018 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:57.120025 | orchestrator | 2026-01-05 03:03:57.120030 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 03:03:57.120036 | orchestrator | Monday 05 January 2026 03:03:51 +0000 (0:00:02.652) 0:04:44.587 ******** 2026-01-05 03:03:57.120042 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:03:57.120048 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:03:57.120054 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:03:57.120060 | orchestrator | 2026-01-05 03:03:57.120066 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-05 03:03:57.120072 | orchestrator | Monday 05 January 2026 03:03:54 +0000 (0:00:03.445) 0:04:48.032 ******** 2026-01-05 03:03:57.120079 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-05 03:03:57.120085 | orchestrator | 2026-01-05 03:03:57.120091 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-05 03:03:57.120105 | orchestrator | Monday 05 January 2026 03:03:55 +0000 (0:00:00.948) 0:04:48.981 ******** 2026-01-05 03:03:57.120110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 
03:03:57.120115 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:03:57.120118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 03:03:57.120122 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:03:57.120140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 03:04:12.695998 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:12.696107 | orchestrator | 2026-01-05 03:04:12.696119 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-05 03:04:12.696126 | orchestrator | Monday 05 January 2026 03:03:57 +0000 (0:00:01.607) 0:04:50.588 ******** 2026-01-05 03:04:12.696135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 03:04:12.696143 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:12.696149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 03:04:12.696155 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:12.696160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-05 03:04:12.696165 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:12.696171 | orchestrator | 2026-01-05 03:04:12.696176 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-05 03:04:12.696182 | orchestrator | Monday 05 January 2026 03:03:58 +0000 (0:00:01.493) 0:04:52.082 ******** 2026-01-05 03:04:12.696187 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:12.696192 | orchestrator | skipping: [testbed-node-1] 
2026-01-05 03:04:12.696197 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:12.696202 | orchestrator | 2026-01-05 03:04:12.696208 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-05 03:04:12.696227 | orchestrator | Monday 05 January 2026 03:04:00 +0000 (0:00:02.106) 0:04:54.188 ******** 2026-01-05 03:04:12.696232 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:04:12.696238 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:04:12.696244 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:04:12.696249 | orchestrator | 2026-01-05 03:04:12.696254 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-05 03:04:12.696259 | orchestrator | Monday 05 January 2026 03:04:03 +0000 (0:00:02.748) 0:04:56.937 ******** 2026-01-05 03:04:12.696265 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:04:12.696270 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:04:12.696275 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:04:12.696298 | orchestrator | 2026-01-05 03:04:12.696304 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-05 03:04:12.696309 | orchestrator | Monday 05 January 2026 03:04:07 +0000 (0:00:03.690) 0:05:00.627 ******** 2026-01-05 03:04:12.696314 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:04:12.696319 | orchestrator | 2026-01-05 03:04:12.696324 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-05 03:04:12.696329 | orchestrator | Monday 05 January 2026 03:04:08 +0000 (0:00:01.679) 0:05:02.307 ******** 2026-01-05 03:04:12.696336 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 03:04:12.696357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 03:04:12.696365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 03:04:12.696371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 03:04:12.696378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:04:12.696458 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 03:04:12.696523 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-05 03:04:13.532795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 03:04:13.532906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 03:04:13.532919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 03:04:13.532928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 03:04:13.532973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 03:04:13.532984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 03:04:13.532992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:04:13.533015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 
'timeout': '30'}}})  2026-01-05 03:04:13.533024 | orchestrator | 2026-01-05 03:04:13.533033 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-05 03:04:13.533042 | orchestrator | Monday 05 January 2026 03:04:12 +0000 (0:00:03.990) 0:05:06.297 ******** 2026-01-05 03:04:13.533051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 03:04:13.533061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 03:04:13.533081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 03:04:13.533090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 03:04:13.533098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:04:13.533105 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:13.533120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 03:04:26.781250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 03:04:26.781377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 
03:04:26.781430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 03:04:26.781442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:04:26.781449 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:26.781458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-05 03:04:26.781466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-05 03:04:26.781492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-05 03:04:26.781500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-05 03:04:26.781518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-05 03:04:26.781526 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:26.781532 | orchestrator | 2026-01-05 03:04:26.781539 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-05 03:04:26.781547 | orchestrator | Monday 05 January 2026 03:04:13 +0000 (0:00:00.853) 0:05:07.151 ******** 2026-01-05 03:04:26.781555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 03:04:26.781564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 03:04:26.781572 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:26.781578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 03:04:26.781584 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 03:04:26.781591 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:26.781597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 03:04:26.781604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-05 03:04:26.781610 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:26.781614 | orchestrator | 2026-01-05 03:04:26.781618 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-05 03:04:26.781621 | orchestrator | Monday 05 January 2026 03:04:14 +0000 (0:00:01.318) 0:05:08.470 ******** 2026-01-05 03:04:26.781625 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:04:26.781630 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:04:26.781634 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:04:26.781637 | orchestrator | 2026-01-05 03:04:26.781641 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-05 03:04:26.781645 | orchestrator | Monday 05 January 2026 03:04:16 +0000 (0:00:01.530) 0:05:10.000 ******** 2026-01-05 03:04:26.781649 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:04:26.781653 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:04:26.781657 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:04:26.781660 | orchestrator | 2026-01-05 03:04:26.781664 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-05 
03:04:26.781668 | orchestrator | Monday 05 January 2026 03:04:18 +0000 (0:00:02.342) 0:05:12.343 ******** 2026-01-05 03:04:26.781672 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:04:26.781683 | orchestrator | 2026-01-05 03:04:26.781687 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-05 03:04:26.781739 | orchestrator | Monday 05 January 2026 03:04:20 +0000 (0:00:01.833) 0:05:14.176 ******** 2026-01-05 03:04:26.781752 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 03:04:27.539468 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 03:04:27.539568 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 03:04:27.539581 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 03:04:27.539590 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 03:04:27.539638 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 03:04:27.539648 | orchestrator | 2026-01-05 03:04:27.539656 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-05 03:04:27.539664 | orchestrator | Monday 05 January 2026 03:04:26 +0000 (0:00:06.074) 0:05:20.251 ******** 2026-01-05 03:04:27.539671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 03:04:27.539678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 03:04:27.539691 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:27.539699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 03:04:27.539712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 03:04:35.955331 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:35.955421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-05 03:04:35.955431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-05 03:04:35.955456 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:35.955464 | orchestrator | 2026-01-05 03:04:35.955473 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-05 03:04:35.955482 | orchestrator | Monday 05 January 2026 03:04:27 +0000 (0:00:00.758) 0:05:21.009 ******** 2026-01-05 03:04:35.955492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 03:04:35.955498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 03:04:35.955506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 03:04:35.955512 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:35.955516 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 03:04:35.955520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 03:04:35.955525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 03:04:35.955529 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:35.955533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-05 03:04:35.955537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 03:04:35.955557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-05 03:04:35.955562 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:35.955566 | orchestrator | 2026-01-05 03:04:35.955571 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-05 03:04:35.955575 | orchestrator | Monday 05 January 2026 03:04:29 +0000 (0:00:01.856) 0:05:22.866 ******** 2026-01-05 
03:04:35.955579 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:35.955583 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:35.955587 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:35.955591 | orchestrator | 2026-01-05 03:04:35.955595 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-05 03:04:35.955599 | orchestrator | Monday 05 January 2026 03:04:29 +0000 (0:00:00.497) 0:05:23.363 ******** 2026-01-05 03:04:35.955604 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:35.955608 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:35.955612 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:35.955616 | orchestrator | 2026-01-05 03:04:35.955620 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-05 03:04:35.955624 | orchestrator | Monday 05 January 2026 03:04:31 +0000 (0:00:01.502) 0:05:24.866 ******** 2026-01-05 03:04:35.955633 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:04:35.955638 | orchestrator | 2026-01-05 03:04:35.955642 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-05 03:04:35.955647 | orchestrator | Monday 05 January 2026 03:04:33 +0000 (0:00:01.841) 0:05:26.707 ******** 2026-01-05 03:04:35.955652 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': 
True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 03:04:35.955659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 03:04:35.955666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:35.955672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:35.955677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 03:04:35.955690 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 03:04:37.746250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 03:04:37.746362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:37.746379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:37.746394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 03:04:37.746406 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-05 03:04:37.746420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 03:04:37.746450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:37.746481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:37.746518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 03:04:37.746531 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 03:04:37.746549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 03:04:37.746566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:37.746605 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 03:04:38.539813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.539929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 03:04:38.539943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 03:04:38.539952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.539961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.539969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 03:04:38.540017 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-05 03:04:38.540050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 03:04:38.540058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.540065 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.540073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 03:04:38.540081 | orchestrator | 2026-01-05 03:04:38.540090 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-05 03:04:38.540098 | orchestrator | Monday 05 January 2026 03:04:37 +0000 (0:00:04.678) 0:05:31.386 ******** 2026-01-05 03:04:38.540110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 03:04:38.540124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 03:04:38.540140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.689379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.689490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 03:04:38.689507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 03:04:38.689524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 03:04:38.689578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.689590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.689621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 03:04:38.689634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 03:04:38.689645 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:38.689658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 03:04:38.689670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.689680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.689708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 03:04:38.689749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 03:04:38.845671 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 03:04:38.845818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.845829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-05 03:04:38.845855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.845873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-05 03:04:38.845879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 03:04:38.845885 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:38.845906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.845912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.845919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-05 03:04:38.845925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-05 03:04:38.845945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-05 03:04:38.845951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-05 03:04:38.845956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-05 03:04:38.845967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-05 03:04:47.327674 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:47.327810 | orchestrator | 2026-01-05 03:04:47.327825 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-05 03:04:47.327835 | orchestrator | Monday 05 January 2026 03:04:38 +0000 (0:00:00.932) 0:05:32.318 ******** 2026-01-05 03:04:47.327845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-05 03:04:47.327858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-05 03:04:47.327871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 03:04:47.327884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 03:04:47.327930 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:47.327940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-05 03:04:47.327949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-05 03:04:47.327959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 03:04:47.327968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 03:04:47.327977 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:47.327986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-05 03:04:47.327995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-05 03:04:47.328004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 03:04:47.328013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-05 03:04:47.328023 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:47.328032 | orchestrator | 2026-01-05 03:04:47.328041 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-05 03:04:47.328050 | orchestrator | Monday 05 January 2026 03:04:40 +0000 (0:00:01.572) 0:05:33.890 ******** 2026-01-05 03:04:47.328059 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:04:47.328068 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:47.328076 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:47.328085 | orchestrator | 2026-01-05 03:04:47.328094 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-05 03:04:47.328102 | orchestrator | Monday 05 January 2026 03:04:40 +0000 (0:00:00.494) 0:05:34.385 ******** 2026-01-05 03:04:47.328111 | orchestrator | skipping: 
[testbed-node-0] 2026-01-05 03:04:47.328120 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:04:47.328129 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:04:47.328137 | orchestrator | 2026-01-05 03:04:47.328146 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-05 03:04:47.328171 | orchestrator | Monday 05 January 2026 03:04:42 +0000 (0:00:01.549) 0:05:35.934 ******** 2026-01-05 03:04:47.328182 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:04:47.328193 | orchestrator | 2026-01-05 03:04:47.328203 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-05 03:04:47.328214 | orchestrator | Monday 05 January 2026 03:04:44 +0000 (0:00:01.919) 0:05:37.854 ******** 2026-01-05 03:04:47.328228 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:04:47.328337 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:04:47.328361 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:04:47.328372 | orchestrator | 2026-01-05 03:04:47.328383 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when 
using single external frontend] *** 2026-01-05 03:04:47.328395 | orchestrator | Monday 05 January 2026 03:04:47 +0000 (0:00:02.734) 0:05:40.588 ******** 2026-01-05 03:04:47.328415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 03:05:00.484821 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:05:00.484907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 03:05:00.484921 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:05:00.484928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-05 03:05:00.484935 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:05:00.484940 | orchestrator | 2026-01-05 03:05:00.484947 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-05 03:05:00.484958 | orchestrator | Monday 05 January 2026 03:04:47 +0000 (0:00:00.769) 0:05:41.358 ******** 2026-01-05 03:05:00.485011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-05 03:05:00.485021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': 
{'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-05 03:05:00.485027 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:05:00.485035 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:05:00.485041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-05 03:05:00.485048 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:05:00.485054 | orchestrator | 2026-01-05 03:05:00.485061 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-05 03:05:00.485067 | orchestrator | Monday 05 January 2026 03:04:48 +0000 (0:00:00.845) 0:05:42.204 ******** 2026-01-05 03:05:00.485074 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:05:00.485081 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:05:00.485087 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:05:00.485094 | orchestrator | 2026-01-05 03:05:00.485102 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-05 03:05:00.485107 | orchestrator | Monday 05 January 2026 03:04:49 +0000 (0:00:00.493) 0:05:42.697 ******** 2026-01-05 03:05:00.485111 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:05:00.485132 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:05:00.485136 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:05:00.485140 | orchestrator | 2026-01-05 03:05:00.485144 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-05 03:05:00.485148 | orchestrator | Monday 05 January 2026 03:04:51 +0000 (0:00:01.988) 0:05:44.687 ******** 2026-01-05 03:05:00.485152 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:05:00.485156 | orchestrator | 2026-01-05 03:05:00.485159 | orchestrator | TASK 
[haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-05 03:05:00.485163 | orchestrator | Monday 05 January 2026 03:04:53 +0000 (0:00:01.984) 0:05:46.672 ******** 2026-01-05 03:05:00.485181 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 03:05:00.485188 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 03:05:00.485197 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-05 03:05:00.485202 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}}) 2026-01-05 03:05:00.485218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 03:05:02.750720 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-05 03:05:02.750803 | orchestrator | 2026-01-05 03:05:02.750811 | orchestrator 
| TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-05 03:05:02.750816 | orchestrator | Monday 05 January 2026 03:05:00 +0000 (0:00:07.280) 0:05:53.952 ******** 2026-01-05 03:05:02.750822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 03:05:02.750842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 03:05:02.750865 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:05:02.750871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 03:05:02.750890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 03:05:02.750894 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:05:02.750899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-05 03:05:02.750906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-05 03:05:02.750914 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:05:02.750918 | orchestrator | 2026-01-05 03:05:02.750923 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-05 03:05:02.750927 | orchestrator | Monday 05 January 2026 03:05:01 +0000 (0:00:01.173) 0:05:55.126 ******** 2026-01-05 03:05:02.750933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 03:05:02.750941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 03:05:02.750947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 03:05:02.750953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 03:05:02.750957 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:05:02.750961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 03:05:02.750969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 03:06:33.278209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 03:06:33.278308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 03:06:33.278321 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:06:33.278330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 03:06:33.278337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-05 03:06:33.278344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 03:06:33.278350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-05 03:06:33.278356 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:06:33.278363 | orchestrator | 2026-01-05 03:06:33.278370 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-05 03:06:33.278378 | 
orchestrator | Monday 05 January 2026 03:05:02 +0000 (0:00:01.088) 0:05:56.214 ******** 2026-01-05 03:06:33.278407 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:06:33.278415 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:06:33.278422 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:06:33.278429 | orchestrator | 2026-01-05 03:06:33.278435 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-05 03:06:33.278466 | orchestrator | Monday 05 January 2026 03:05:04 +0000 (0:00:01.428) 0:05:57.643 ******** 2026-01-05 03:06:33.278472 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:06:33.278478 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:06:33.278496 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:06:33.278502 | orchestrator | 2026-01-05 03:06:33.278508 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-05 03:06:33.278513 | orchestrator | Monday 05 January 2026 03:05:07 +0000 (0:00:02.870) 0:06:00.514 ******** 2026-01-05 03:06:33.278519 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:06:33.278525 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:06:33.278530 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:06:33.278537 | orchestrator | 2026-01-05 03:06:33.278544 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-05 03:06:33.278550 | orchestrator | Monday 05 January 2026 03:05:07 +0000 (0:00:00.377) 0:06:00.891 ******** 2026-01-05 03:06:33.278556 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:06:33.278571 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:06:33.278577 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:06:33.278583 | orchestrator | 2026-01-05 03:06:33.278588 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-05 03:06:33.278594 | orchestrator | Monday 05 January 2026 
03:05:07 +0000 (0:00:00.400) 0:06:01.292 ******** 2026-01-05 03:06:33.278600 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:06:33.278606 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:06:33.278611 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:06:33.278618 | orchestrator | 2026-01-05 03:06:33.278625 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-05 03:06:33.278632 | orchestrator | Monday 05 January 2026 03:05:08 +0000 (0:00:00.377) 0:06:01.669 ******** 2026-01-05 03:06:33.278639 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:06:33.278645 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:06:33.278651 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:06:33.278657 | orchestrator | 2026-01-05 03:06:33.278663 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-05 03:06:33.278668 | orchestrator | Monday 05 January 2026 03:05:08 +0000 (0:00:00.712) 0:06:02.381 ******** 2026-01-05 03:06:33.278674 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:06:33.278680 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:06:33.278686 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:06:33.278691 | orchestrator | 2026-01-05 03:06:33.278697 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-05 03:06:33.278703 | orchestrator | Monday 05 January 2026 03:05:09 +0000 (0:00:00.364) 0:06:02.746 ******** 2026-01-05 03:06:33.278709 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:06:33.278715 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-01-05 03:06:33.278722 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-01-05 03:06:33.278734 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:06:33.278741 | orchestrator | skipping: [testbed-node-2] 2026-01-05 
03:06:33.278747 | orchestrator | 2026-01-05 03:06:33.278753 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-05 03:06:33.278758 | orchestrator | Monday 05 January 2026 03:05:09 +0000 (0:00:00.594) 0:06:03.340 ******** 2026-01-05 03:06:33.278764 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:06:33.278770 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:06:33.278794 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:06:33.278811 | orchestrator | 2026-01-05 03:06:33.278818 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-05 03:06:33.278824 | orchestrator | Monday 05 January 2026 03:05:11 +0000 (0:00:01.252) 0:06:04.593 ******** 2026-01-05 03:06:33.278831 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:06:33.278838 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:06:33.278844 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:06:33.278851 | orchestrator | 2026-01-05 03:06:33.278858 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-05 03:06:33.278864 | orchestrator | Monday 05 January 2026 03:05:11 +0000 (0:00:00.393) 0:06:04.986 ******** 2026-01-05 03:06:33.278871 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:06:33.278877 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:06:33.278888 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:06:33.278900 | orchestrator | 2026-01-05 03:06:33.278911 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-05 03:06:33.278919 | orchestrator | Monday 05 January 2026 03:05:17 +0000 (0:00:06.148) 0:06:11.135 ******** 2026-01-05 03:06:33.278926 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:06:33.278933 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:06:33.278939 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:06:33.278945 | orchestrator | 2026-01-05 
03:06:33.278951 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-05 03:06:33.278959 | orchestrator | Monday 05 January 2026 03:05:24 +0000 (0:00:06.488) 0:06:17.624 ********
2026-01-05 03:06:33.278968 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:06:33.278974 | orchestrator | changed: [testbed-node-1]
2026-01-05 03:06:33.278980 | orchestrator | changed: [testbed-node-2]
2026-01-05 03:06:33.278987 | orchestrator |
2026-01-05 03:06:33.278994 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-05 03:06:33.279000 | orchestrator | Monday 05 January 2026 03:05:30 +0000 (0:00:06.158) 0:06:23.782 ********
2026-01-05 03:06:33.279007 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:06:33.279014 | orchestrator | changed: [testbed-node-1]
2026-01-05 03:06:33.279019 | orchestrator | changed: [testbed-node-2]
2026-01-05 03:06:33.279026 | orchestrator |
2026-01-05 03:06:33.279032 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-05 03:06:33.279038 | orchestrator | Monday 05 January 2026 03:05:37 +0000 (0:00:07.098) 0:06:30.881 ********
2026-01-05 03:06:33.279044 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:06:33.279050 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:06:33.279056 | orchestrator |
2026-01-05 03:06:33.279061 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-05 03:06:33.279068 | orchestrator | Monday 05 January 2026 03:05:40 +0000 (0:00:02.728) 0:06:33.609 ********
2026-01-05 03:06:33.279074 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:06:33.279081 | orchestrator | changed: [testbed-node-2]
2026-01-05 03:06:33.279087 | orchestrator | changed: [testbed-node-1]
2026-01-05 03:06:33.279094 | orchestrator |
2026-01-05 03:06:33.279108 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-05 03:06:33.279115 | orchestrator | Monday 05 January 2026 03:05:52 +0000 (0:00:12.151) 0:06:45.761 ********
2026-01-05 03:06:33.279121 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:06:33.279127 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:06:33.279132 | orchestrator |
2026-01-05 03:06:33.279138 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-05 03:06:33.279144 | orchestrator | Monday 05 January 2026 03:05:55 +0000 (0:00:03.702) 0:06:49.464 ********
2026-01-05 03:06:33.279150 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:06:33.279156 | orchestrator | changed: [testbed-node-1]
2026-01-05 03:06:33.279162 | orchestrator | changed: [testbed-node-2]
2026-01-05 03:06:33.279168 | orchestrator |
2026-01-05 03:06:33.279174 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-05 03:06:33.279180 | orchestrator | Monday 05 January 2026 03:06:02 +0000 (0:00:06.485) 0:06:55.949 ********
2026-01-05 03:06:33.279193 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:06:33.279200 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:06:33.279206 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:06:33.279212 | orchestrator |
2026-01-05 03:06:33.279218 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-05 03:06:33.279224 | orchestrator | Monday 05 January 2026 03:06:08 +0000 (0:00:05.885) 0:07:01.835 ********
2026-01-05 03:06:33.279232 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:06:33.279237 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:06:33.279243 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:06:33.279249 | orchestrator |
2026-01-05 03:06:33.279255 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-05 03:06:33.279265 | orchestrator | Monday 05 January 2026 03:06:14 +0000 (0:00:05.902) 0:07:07.738 ********
2026-01-05 03:06:33.279272 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:06:33.279280 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:06:33.279287 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:06:33.279294 | orchestrator |
2026-01-05 03:06:33.279299 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-05 03:06:33.279305 | orchestrator | Monday 05 January 2026 03:06:20 +0000 (0:00:05.932) 0:07:13.670 ********
2026-01-05 03:06:33.279311 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:06:33.279317 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:06:33.279323 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:06:33.279329 | orchestrator |
2026-01-05 03:06:33.279335 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] **************
2026-01-05 03:06:33.279340 | orchestrator | Monday 05 January 2026 03:06:25 +0000 (0:00:05.247) 0:07:18.918 ********
2026-01-05 03:06:33.279347 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:06:33.279353 | orchestrator |
2026-01-05 03:06:33.279359 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-05 03:06:33.279364 | orchestrator | Monday 05 January 2026 03:06:28 +0000 (0:00:02.676) 0:07:21.594 ********
2026-01-05 03:06:33.279369 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:06:33.279375 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:06:33.279382 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:06:33.279388 | orchestrator |
2026-01-05 03:06:33.279394 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] *************
2026-01-05 03:06:33.279411 | orchestrator | Monday 05 January 2026 03:06:33 +0000 (0:00:05.148) 0:07:26.743 ********
2026-01-05 03:06:46.703353 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:06:46.703503 | orchestrator |
2026-01-05 03:06:46.703522 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-05 03:06:46.703534 | orchestrator | Monday 05 January 2026 03:06:37 +0000 (0:00:04.644) 0:07:31.387 ********
2026-01-05 03:06:46.703545 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:06:46.703558 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:06:46.703569 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:06:46.703580 | orchestrator |
2026-01-05 03:06:46.703591 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-05 03:06:46.703598 | orchestrator | Monday 05 January 2026 03:06:43 +0000 (0:00:05.273) 0:07:36.661 ********
2026-01-05 03:06:46.703605 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:06:46.703613 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:06:46.703619 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:06:46.703625 | orchestrator |
2026-01-05 03:06:46.703632 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-05 03:06:46.703638 | orchestrator | Monday 05 January 2026 03:06:44 +0000 (0:00:01.044) 0:07:37.706 ********
2026-01-05 03:06:46.703658 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:06:46.703679 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:06:46.703695 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:06:46.703705 | orchestrator |
2026-01-05 03:06:46.703716 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 03:06:46.703753 | orchestrator | testbed-node-0 : ok=125  changed=9  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-05 03:06:46.703765 | orchestrator | testbed-node-1 : ok=124  changed=8  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-05 03:06:46.703774 | orchestrator | testbed-node-2 : ok=124  changed=8  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-05 03:06:46.703783 | orchestrator |
2026-01-05 03:06:46.703792 | orchestrator |
2026-01-05 03:06:46.703802 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 03:06:46.703812 | orchestrator | Monday 05 January 2026 03:06:46 +0000 (0:00:01.888) 0:07:39.595 ********
2026-01-05 03:06:46.703821 | orchestrator | ===============================================================================
2026-01-05 03:06:46.703831 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.15s
2026-01-05 03:06:46.703841 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.28s
2026-01-05 03:06:46.703850 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.10s
2026-01-05 03:06:46.703874 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 6.49s
2026-01-05 03:06:46.703884 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 6.49s
2026-01-05 03:06:46.703894 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 6.16s
2026-01-05 03:06:46.703903 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 6.15s
2026-01-05 03:06:46.703914 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.07s
2026-01-05 03:06:46.703926 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 5.93s
2026-01-05 03:06:46.703937 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 5.90s
2026-01-05 03:06:46.703947 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 5.89s
2026-01-05 03:06:46.703958 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 5.27s
2026-01-05 03:06:46.703968 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 5.25s
2026-01-05 03:06:46.703978 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.21s
2026-01-05 03:06:46.703988 | orchestrator | loadbalancer : Start master proxysql container -------------------------- 5.15s
2026-01-05 03:06:46.703999 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.10s
2026-01-05 03:06:46.704009 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.77s
2026-01-05 03:06:46.704019 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.68s
2026-01-05 03:06:46.704030 | orchestrator | loadbalancer : Wait for master proxysql to start ------------------------ 4.64s
2026-01-05 03:06:46.704040 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.51s
2026-01-05 03:06:46.922089 | orchestrator | + osism apply -a upgrade opensearch
2026-01-05 03:06:48.727704 | orchestrator | 2026-01-05 03:06:48 | INFO  | Task e6cdf331-d64d-4f16-94fe-17c9ccf0fff6 (opensearch) was prepared for execution.
2026-01-05 03:06:48.727778 | orchestrator | 2026-01-05 03:06:48 | INFO  | It takes a moment until task e6cdf331-d64d-4f16-94fe-17c9ccf0fff6 (opensearch) has been started and output is visible here.
2026-01-05 03:07:11.877530 | orchestrator |
2026-01-05 03:07:11.877637 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 03:07:11.877646 | orchestrator |
2026-01-05 03:07:11.877651 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 03:07:11.877656 | orchestrator | Monday 05 January 2026 03:06:54 +0000 (0:00:01.660) 0:00:01.660 ********
2026-01-05 03:07:11.877660 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:07:11.877666 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:07:11.877688 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:07:11.877693 | orchestrator |
2026-01-05 03:07:11.877697 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 03:07:11.877702 | orchestrator | Monday 05 January 2026 03:06:56 +0000 (0:00:01.891) 0:00:03.552 ********
2026-01-05 03:07:11.877707 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-01-05 03:07:11.877712 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-01-05 03:07:11.877716 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-01-05 03:07:11.877720 | orchestrator |
2026-01-05 03:07:11.877725 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-01-05 03:07:11.877729 | orchestrator |
2026-01-05 03:07:11.877733 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-05 03:07:11.877737 | orchestrator | Monday 05 January 2026 03:06:58 +0000 (0:00:01.866) 0:00:05.419 ********
2026-01-05 03:07:11.877742 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 03:07:11.877746 | orchestrator |
2026-01-05 03:07:11.877750 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-01-05 03:07:11.877754 | orchestrator | Monday 05 January 2026 03:07:01 +0000 (0:00:03.012) 0:00:08.431 ********
2026-01-05 03:07:11.877759 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 03:07:11.877763 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 03:07:11.877767 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-05 03:07:11.877771 | orchestrator |
2026-01-05 03:07:11.877775 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-01-05 03:07:11.877779 | orchestrator | Monday 05 January 2026 03:07:03 +0000 (0:00:02.418) 0:00:10.850 ********
2026-01-05 03:07:11.877798 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:11.877806 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:11.877822 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:11.877835 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:11.877845 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:11.877862 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:11.877869 | orchestrator |
2026-01-05 03:07:11.877875 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-05 03:07:11.877882 | orchestrator | Monday 05 January 2026 03:07:06 +0000 (0:00:02.524) 0:00:13.374 ********
2026-01-05 03:07:11.877888 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 03:07:11.877903 | orchestrator |
2026-01-05 03:07:11.877909 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-01-05 03:07:11.877915 | orchestrator | Monday 05 January 2026 03:07:08 +0000 (0:00:01.697) 0:00:15.072 ********
2026-01-05 03:07:11.877928 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:13.570140 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:13.570230 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:13.570258 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:13.570268 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:13.570310 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:13.570319 | orchestrator |
2026-01-05 03:07:13.570327 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-01-05 03:07:13.570335 | orchestrator | Monday 05 January 2026 03:07:11 +0000 (0:00:03.772) 0:00:18.845 ********
2026-01-05 03:07:13.570342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:13.570353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:13.570445 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:07:13.570456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:13.570470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:19.011684 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:07:19.011814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:19.011856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:19.011896 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:07:19.011910 | orchestrator |
2026-01-05 03:07:19.011922 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-01-05 03:07:19.011935 | orchestrator | Monday 05 January 2026 03:07:13 +0000 (0:00:01.697) 0:00:20.543 ********
2026-01-05 03:07:19.011946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:19.011958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:19.011971 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:07:19.012005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:19.012026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:19.012051 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:07:19.012064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-05 03:07:19.012076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-05 03:07:19.012089 | orchestrator | skipping: 
[testbed-node-2] 2026-01-05 03:07:19.012101 | orchestrator | 2026-01-05 03:07:19.012113 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-05 03:07:19.012124 | orchestrator | Monday 05 January 2026 03:07:15 +0000 (0:00:01.685) 0:00:22.228 ******** 2026-01-05 03:07:19.012145 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 03:07:29.804060 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-01-05 03:07:29.804169 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 03:07:29.804178 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 03:07:29.804184 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 03:07:29.804200 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 03:07:29.804209 | orchestrator | 2026-01-05 03:07:29.804219 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-05 03:07:29.804224 | orchestrator | Monday 05 January 2026 03:07:19 +0000 (0:00:03.758) 0:00:25.986 ******** 2026-01-05 03:07:29.804228 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:07:29.804233 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:07:29.804236 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:07:29.804240 | orchestrator | 2026-01-05 03:07:29.804244 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-05 03:07:29.804248 | orchestrator | Monday 05 January 2026 03:07:22 +0000 (0:00:03.529) 0:00:29.516 ******** 2026-01-05 03:07:29.804252 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:07:29.804256 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:07:29.804260 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:07:29.804263 | orchestrator | 2026-01-05 03:07:29.804268 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-05 03:07:29.804272 | orchestrator | Monday 05 January 2026 03:07:25 +0000 (0:00:03.140) 0:00:32.656 ******** 2026-01-05 03:07:29.804276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 03:07:29.804280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 03:07:29.804285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-05 
03:07:29.804296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 03:10:24.134372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 03:10:24.134510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-05 03:10:24.134527 | orchestrator | 2026-01-05 03:10:24.134535 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 03:10:24.134545 | orchestrator | Monday 05 January 2026 03:07:29 +0000 (0:00:04.111) 0:00:36.768 ******** 2026-01-05 03:10:24.134552 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:10:24.134559 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:10:24.134565 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:10:24.134571 | orchestrator | 2026-01-05 03:10:24.134579 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 03:10:24.134585 | orchestrator | Monday 05 January 2026 
03:07:31 +0000 (0:00:01.402) 0:00:38.171 ******** 2026-01-05 03:10:24.134591 | orchestrator | 2026-01-05 03:10:24.134598 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 03:10:24.134604 | orchestrator | Monday 05 January 2026 03:07:31 +0000 (0:00:00.457) 0:00:38.628 ******** 2026-01-05 03:10:24.134610 | orchestrator | 2026-01-05 03:10:24.134618 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-05 03:10:24.134647 | orchestrator | Monday 05 January 2026 03:07:32 +0000 (0:00:00.458) 0:00:39.087 ******** 2026-01-05 03:10:24.134653 | orchestrator | 2026-01-05 03:10:24.134660 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-05 03:10:24.134666 | orchestrator | Monday 05 January 2026 03:07:32 +0000 (0:00:00.806) 0:00:39.893 ******** 2026-01-05 03:10:24.134673 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:10:24.134680 | orchestrator | 2026-01-05 03:10:24.134686 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-05 03:10:24.134693 | orchestrator | Monday 05 January 2026 03:07:36 +0000 (0:00:03.820) 0:00:43.714 ******** 2026-01-05 03:10:24.134699 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:10:24.134705 | orchestrator | 2026-01-05 03:10:24.134712 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-05 03:10:24.134719 | orchestrator | Monday 05 January 2026 03:07:46 +0000 (0:00:09.350) 0:00:53.064 ******** 2026-01-05 03:10:24.134726 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:10:24.134732 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:10:24.134739 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:10:24.134746 | orchestrator | 2026-01-05 03:10:24.134752 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-05 
03:10:24.134758 | orchestrator | Monday 05 January 2026 03:08:49 +0000 (0:01:03.019) 0:01:56.084 ******** 2026-01-05 03:10:24.134765 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:10:24.134771 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:10:24.134778 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:10:24.134785 | orchestrator | 2026-01-05 03:10:24.134791 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-05 03:10:24.134798 | orchestrator | Monday 05 January 2026 03:10:11 +0000 (0:01:22.314) 0:03:18.399 ******** 2026-01-05 03:10:24.134841 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:10:24.134850 | orchestrator | 2026-01-05 03:10:24.134856 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-05 03:10:24.134863 | orchestrator | Monday 05 January 2026 03:10:13 +0000 (0:00:01.794) 0:03:20.193 ******** 2026-01-05 03:10:24.134870 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:10:24.134876 | orchestrator | 2026-01-05 03:10:24.134883 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-05 03:10:24.134889 | orchestrator | Monday 05 January 2026 03:10:16 +0000 (0:00:03.627) 0:03:23.820 ******** 2026-01-05 03:10:24.134896 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:10:24.134903 | orchestrator | 2026-01-05 03:10:24.134909 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-05 03:10:24.134916 | orchestrator | Monday 05 January 2026 03:10:20 +0000 (0:00:03.973) 0:03:27.794 ******** 2026-01-05 03:10:24.134924 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:10:24.134928 | orchestrator | 2026-01-05 03:10:24.134933 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-05 
03:10:24.134937 | orchestrator | Monday 05 January 2026 03:10:22 +0000 (0:00:01.249) 0:03:29.043 ******** 2026-01-05 03:10:24.134942 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:10:24.134946 | orchestrator | 2026-01-05 03:10:24.134951 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 03:10:24.134957 | orchestrator | testbed-node-0 : ok=18  changed=3  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 03:10:24.134963 | orchestrator | testbed-node-1 : ok=14  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 03:10:24.134967 | orchestrator | testbed-node-2 : ok=14  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 03:10:24.134972 | orchestrator | 2026-01-05 03:10:24.134976 | orchestrator | 2026-01-05 03:10:24.134981 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 03:10:24.134994 | orchestrator | Monday 05 January 2026 03:10:23 +0000 (0:00:01.630) 0:03:30.674 ******** 2026-01-05 03:10:24.135069 | orchestrator | =============================================================================== 2026-01-05 03:10:24.135076 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.32s 2026-01-05 03:10:24.135081 | orchestrator | opensearch : Restart opensearch container ------------------------------ 63.02s 2026-01-05 03:10:24.135085 | orchestrator | opensearch : Perform a flush -------------------------------------------- 9.35s 2026-01-05 03:10:24.135090 | orchestrator | opensearch : Check opensearch containers -------------------------------- 4.11s 2026-01-05 03:10:24.135094 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.97s 2026-01-05 03:10:24.135099 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.82s 2026-01-05 03:10:24.135103 | orchestrator | 
service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.77s 2026-01-05 03:10:24.135108 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.76s 2026-01-05 03:10:24.135112 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.63s 2026-01-05 03:10:24.135116 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.53s 2026-01-05 03:10:24.135121 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.14s 2026-01-05 03:10:24.135126 | orchestrator | opensearch : include_tasks ---------------------------------------------- 3.01s 2026-01-05 03:10:24.135131 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.52s 2026-01-05 03:10:24.135135 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.42s 2026-01-05 03:10:24.135140 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.89s 2026-01-05 03:10:24.135145 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.87s 2026-01-05 03:10:24.135149 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.79s 2026-01-05 03:10:24.135154 | orchestrator | opensearch : Flush handlers --------------------------------------------- 1.72s 2026-01-05 03:10:24.135158 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.70s 2026-01-05 03:10:24.135163 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.70s 2026-01-05 03:10:24.471170 | orchestrator | + osism apply -a upgrade memcached 2026-01-05 03:10:26.640934 | orchestrator | 2026-01-05 03:10:26 | INFO  | Task 48c73f6d-a1c0-4865-9790-2c73556c9913 (memcached) was prepared for execution. 
2026-01-05 03:10:26.641092 | orchestrator | 2026-01-05 03:10:26 | INFO  | It takes a moment until task 48c73f6d-a1c0-4865-9790-2c73556c9913 (memcached) has been started and output is visible here. 2026-01-05 03:10:58.256615 | orchestrator | 2026-01-05 03:10:58.256733 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 03:10:58.256750 | orchestrator | 2026-01-05 03:10:58.256761 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 03:10:58.256771 | orchestrator | Monday 05 January 2026 03:10:32 +0000 (0:00:01.595) 0:00:01.595 ******** 2026-01-05 03:10:58.256781 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:10:58.256792 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:10:58.256801 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:10:58.256810 | orchestrator | 2026-01-05 03:10:58.256819 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 03:10:58.256850 | orchestrator | Monday 05 January 2026 03:10:34 +0000 (0:00:01.872) 0:00:03.468 ******** 2026-01-05 03:10:58.256861 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-05 03:10:58.256874 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-05 03:10:58.256884 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-05 03:10:58.256893 | orchestrator | 2026-01-05 03:10:58.256903 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-05 03:10:58.256939 | orchestrator | 2026-01-05 03:10:58.257053 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-05 03:10:58.257065 | orchestrator | Monday 05 January 2026 03:10:37 +0000 (0:00:03.103) 0:00:06.572 ******** 2026-01-05 03:10:58.257073 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-05 03:10:58.257081 | orchestrator | 2026-01-05 03:10:58.257087 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-05 03:10:58.257093 | orchestrator | Monday 05 January 2026 03:10:39 +0000 (0:00:01.846) 0:00:08.419 ******** 2026-01-05 03:10:58.257100 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-01-05 03:10:58.257107 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-01-05 03:10:58.257113 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-01-05 03:10:58.257119 | orchestrator | 2026-01-05 03:10:58.257125 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-05 03:10:58.257131 | orchestrator | Monday 05 January 2026 03:10:41 +0000 (0:00:01.905) 0:00:10.325 ******** 2026-01-05 03:10:58.257137 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-01-05 03:10:58.257144 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-01-05 03:10:58.257152 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-01-05 03:10:58.257159 | orchestrator | 2026-01-05 03:10:58.257166 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-01-05 03:10:58.257173 | orchestrator | Monday 05 January 2026 03:10:44 +0000 (0:00:02.826) 0:00:13.151 ******** 2026-01-05 03:10:58.257181 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:10:58.257188 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:10:58.257195 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:10:58.257202 | orchestrator | 2026-01-05 03:10:58.257209 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-05 03:10:58.257216 | orchestrator | Monday 05 January 2026 03:10:47 +0000 (0:00:02.904) 0:00:16.055 ******** 2026-01-05 03:10:58.257223 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:10:58.257230 | 
orchestrator | changed: [testbed-node-1] 2026-01-05 03:10:58.257237 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:10:58.257244 | orchestrator | 2026-01-05 03:10:58.257252 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 03:10:58.257260 | orchestrator | testbed-node-0 : ok=7  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 03:10:58.257268 | orchestrator | testbed-node-1 : ok=7  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 03:10:58.257276 | orchestrator | testbed-node-2 : ok=7  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 03:10:58.257284 | orchestrator | 2026-01-05 03:10:58.257290 | orchestrator | 2026-01-05 03:10:58.257296 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 03:10:58.257302 | orchestrator | Monday 05 January 2026 03:10:57 +0000 (0:00:10.511) 0:00:26.566 ******** 2026-01-05 03:10:58.257308 | orchestrator | =============================================================================== 2026-01-05 03:10:58.257315 | orchestrator | memcached : Restart memcached container -------------------------------- 10.51s 2026-01-05 03:10:58.257321 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.10s 2026-01-05 03:10:58.257327 | orchestrator | memcached : Check memcached container ----------------------------------- 2.90s 2026-01-05 03:10:58.257333 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.83s 2026-01-05 03:10:58.257339 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.91s 2026-01-05 03:10:58.257345 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.87s 2026-01-05 03:10:58.257352 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.85s 
2026-01-05 03:10:58.598981 | orchestrator | + osism apply -a upgrade redis 2026-01-05 03:11:00.767698 | orchestrator | 2026-01-05 03:11:00 | INFO  | Task fed10de3-aa1d-42fe-892f-4636a7899e64 (redis) was prepared for execution. 2026-01-05 03:11:00.767782 | orchestrator | 2026-01-05 03:11:00 | INFO  | It takes a moment until task fed10de3-aa1d-42fe-892f-4636a7899e64 (redis) has been started and output is visible here. 2026-01-05 03:11:12.686482 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-01-05 03:11:12.686589 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-01-05 03:11:12.686609 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-01-05 03:11:12.686617 | orchestrator | (): 'NoneType' object is not subscriptable 2026-01-05 03:11:12.686632 | orchestrator | 2026-01-05 03:11:12.686641 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 03:11:12.686649 | orchestrator | 2026-01-05 03:11:12.686674 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 03:11:12.686683 | orchestrator | Monday 05 January 2026 03:11:06 +0000 (0:00:01.026) 0:00:01.026 ******** 2026-01-05 03:11:12.686690 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:11:12.686698 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:11:12.686705 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:11:12.686712 | orchestrator | 2026-01-05 03:11:12.686720 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 03:11:12.686727 | orchestrator | Monday 05 January 2026 03:11:06 +0000 (0:00:00.813) 0:00:01.839 ******** 2026-01-05 03:11:12.686735 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-05 03:11:12.686743 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-05 03:11:12.686750 | 
orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-05 03:11:12.686758 | orchestrator |
2026-01-05 03:11:12.686765 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-05 03:11:12.686772 | orchestrator |
2026-01-05 03:11:12.686779 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-05 03:11:12.686786 | orchestrator | Monday 05 January 2026 03:11:07 +0000 (0:00:00.879) 0:00:02.719 ********
2026-01-05 03:11:12.686793 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 03:11:12.686802 | orchestrator |
2026-01-05 03:11:12.686810 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-05 03:11:12.686817 | orchestrator | Monday 05 January 2026 03:11:08 +0000 (0:00:01.010) 0:00:03.729 ********
2026-01-05 03:11:12.686829 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:12.686852 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:12.686883 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:12.686892 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:12.686916 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:12.687035 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:12.687046 | orchestrator |
2026-01-05 03:11:12.687054 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-05 03:11:12.687063 | orchestrator | Monday 05 January 2026 03:11:10 +0000 (0:00:01.635) 0:00:05.365 ********
2026-01-05 03:11:12.687071 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:12.687080 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:12.687098 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:12.687106 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:12.687194 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes':
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738553 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738648 | orchestrator |
2026-01-05 03:11:18.738659 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-05 03:11:18.738667 | orchestrator | Monday 05 January 2026 03:11:12 +0000 (0:00:02.195) 0:00:07.561 ********
2026-01-05 03:11:18.738675 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738685 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738713 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738726 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738737 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738772 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738784 | orchestrator |
2026-01-05 03:11:18.738795 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-01-05 03:11:18.738806 | orchestrator | Monday 05 January 2026 03:11:15 +0000 (0:00:03.091) 0:00:10.653 ********
2026-01-05 03:11:18.738817 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-01-05 03:11:18.738830 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-01-05 03:11:18.738853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:18.738976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-05 03:11:39.637332 | orchestrator |
2026-01-05 03:11:39.637424 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-05 03:11:39.637431 | orchestrator | Monday 05 January 2026 03:11:18 +0000 (0:00:02.469) 0:00:13.123 ********
2026-01-05 03:11:39.637436 | orchestrator |
2026-01-05 03:11:39.637440 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-05 03:11:39.637444 | orchestrator | Monday 05 January 2026 03:11:18 +0000 (0:00:00.087) 0:00:13.210 ********
2026-01-05 03:11:39.637448 | orchestrator |
2026-01-05 03:11:39.637452 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-05 03:11:39.637456 | orchestrator | Monday 05 January 2026 03:11:18 +0000 (0:00:00.293) 0:00:13.503 ********
2026-01-05 03:11:39.637460 | orchestrator |
2026-01-05 03:11:39.637464 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-01-05 03:11:39.637488 | orchestrator | Monday 05 January 2026 03:11:18 +0000 (0:00:00.106) 0:00:13.609 ********
2026-01-05 03:11:39.637493 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:11:39.637499 | orchestrator | changed: [testbed-node-1]
2026-01-05 03:11:39.637506 | orchestrator | changed: [testbed-node-2]
2026-01-05 03:11:39.637511 | orchestrator |
2026-01-05 03:11:39.637518 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-01-05 03:11:39.637524 | orchestrator | Monday 05 January 2026 03:11:28 +0000 (0:00:09.666) 0:00:23.276 ********
2026-01-05 03:11:39.637530 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:11:39.637537 | orchestrator | changed: [testbed-node-1]
2026-01-05 03:11:39.637542 | orchestrator | changed: [testbed-node-2]
2026-01-05 03:11:39.637548 | orchestrator |
2026-01-05 03:11:39.637554 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 03:11:39.637561 | orchestrator | testbed-node-0 : ok=9  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 03:11:39.637570 | orchestrator | testbed-node-1 : ok=9  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 03:11:39.637577 | orchestrator | testbed-node-2 : ok=9  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-05 03:11:39.637584 | orchestrator |
2026-01-05 03:11:39.637591 | orchestrator |
2026-01-05 03:11:39.637598 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 03:11:39.637617 | orchestrator | Monday 05 January 2026 03:11:39 +0000 (0:00:10.782) 0:00:34.058 ********
2026-01-05 03:11:39.637631 | orchestrator | ===============================================================================
2026-01-05 03:11:39.637637 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.78s
2026-01-05 03:11:39.637645 | orchestrator | redis : Restart redis container ----------------------------------------- 9.67s
2026-01-05 03:11:39.637649 | orchestrator | redis : Copying over redis config files --------------------------------- 3.09s
2026-01-05 03:11:39.637653 | orchestrator | redis : Check redis containers ------------------------------------------ 2.47s
2026-01-05 03:11:39.637657 | orchestrator | redis : Copying over default config.json files -------------------------- 2.20s
2026-01-05 03:11:39.637661 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.64s
2026-01-05 03:11:39.637665 | orchestrator | redis : include_tasks --------------------------------------------------- 1.01s
2026-01-05 03:11:39.637669 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2026-01-05 03:11:39.637673 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s
2026-01-05 03:11:39.637676 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.49s
2026-01-05 03:11:39.978265 | orchestrator | + osism apply -a upgrade mariadb
2026-01-05 03:11:42.205404 | orchestrator | 2026-01-05 03:11:42 | INFO  |
Task 192f6262-3344-4faa-92f2-1d47281c82d3 (mariadb) was prepared for execution.
2026-01-05 03:11:42.205491 | orchestrator | 2026-01-05 03:11:42 | INFO  | It takes a moment until task 192f6262-3344-4faa-92f2-1d47281c82d3 (mariadb) has been started and output is visible here.
2026-01-05 03:12:08.383831 | orchestrator |
2026-01-05 03:12:08.383944 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-05 03:12:08.383956 | orchestrator |
2026-01-05 03:12:08.383963 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-05 03:12:08.383971 | orchestrator | Monday 05 January 2026 03:11:48 +0000 (0:00:01.570) 0:00:01.570 ********
2026-01-05 03:12:08.383978 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:12:08.383986 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:12:08.383992 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:12:08.383999 | orchestrator |
2026-01-05 03:12:08.384006 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-05 03:12:08.384012 | orchestrator | Monday 05 January 2026 03:11:49 +0000 (0:00:01.779) 0:00:03.350 ********
2026-01-05 03:12:08.384038 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-01-05 03:12:08.384045 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-01-05 03:12:08.384052 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-01-05 03:12:08.384058 | orchestrator |
2026-01-05 03:12:08.384066 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-01-05 03:12:08.384072 | orchestrator |
2026-01-05 03:12:08.384079 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-01-05 03:12:08.384098 | orchestrator | Monday 05 January 2026 03:11:52 +0000 (0:00:02.406) 0:00:05.757 ********
2026-01-05 03:12:08.384105 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-05 03:12:08.384111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-05 03:12:08.384118 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-05 03:12:08.384270 | orchestrator |
2026-01-05 03:12:08.384282 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-05 03:12:08.384292 | orchestrator | Monday 05 January 2026 03:11:53 +0000 (0:00:01.580) 0:00:07.338 ********
2026-01-05 03:12:08.384304 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 03:12:08.384315 | orchestrator |
2026-01-05 03:12:08.384325 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-01-05 03:12:08.384334 | orchestrator | Monday 05 January 2026 03:11:55 +0000 (0:00:01.824) 0:00:09.163 ********
2026-01-05 03:12:08.384352 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 03:12:08.384398 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 03:12:08.384424 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 03:12:08.384436 | orchestrator |
2026-01-05 03:12:08.384447 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-01-05 03:12:08.384459 | orchestrator | Monday 05 January 2026 03:11:59 +0000 (0:00:03.985) 0:00:13.148 ********
2026-01-05 03:12:08.384469 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:12:08.384481 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:12:08.384488 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:12:08.384496 | orchestrator |
2026-01-05 03:12:08.384596 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-01-05 03:12:08.384605 | orchestrator | Monday 05 January 2026 03:12:01 +0000 (0:00:02.310) 0:00:14.799 ********
2026-01-05 03:12:08.384644 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:12:08.384655 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:12:08.384666 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:12:08.384676 | orchestrator |
2026-01-05 03:12:08.384687 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-01-05 03:12:08.384705 | orchestrator | Monday 05 January 2026 03:12:03 +0000 (0:00:02.310) 0:00:17.109 ********
2026-01-05 03:12:08.384736 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes':
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 03:12:21.214647 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 03:12:21.214740 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-05 03:12:21.214767 | orchestrator |
2026-01-05 03:12:21.214774 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-01-05 03:12:21.214791 | orchestrator | Monday 05 January 2026 03:12:08 +0000 (0:00:04.627) 0:00:21.736 ********
2026-01-05 03:12:21.214796 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:12:21.214801 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:12:21.214806 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:12:21.214811 | orchestrator |
2026-01-05 03:12:21.214816 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-01-05 03:12:21.214904 | orchestrator | Monday
05 January 2026 03:12:10 +0000 (0:00:02.237) 0:00:23.973 ******** 2026-01-05 03:12:21.214911 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:12:21.214934 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:12:21.214941 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:12:21.214947 | orchestrator | 2026-01-05 03:12:21.214953 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 03:12:21.214960 | orchestrator | Monday 05 January 2026 03:12:15 +0000 (0:00:05.146) 0:00:29.120 ******** 2026-01-05 03:12:21.214967 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:12:21.214975 | orchestrator | 2026-01-05 03:12:21.214983 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-05 03:12:21.214990 | orchestrator | Monday 05 January 2026 03:12:17 +0000 (0:00:01.938) 0:00:31.059 ******** 2026-01-05 03:12:21.214999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:12:21.215017 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:12:21.215037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:12:28.924312 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:12:28.924425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:12:28.924464 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:12:28.924475 | orchestrator | 2026-01-05 03:12:28.924485 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-05 03:12:28.924496 | orchestrator | Monday 05 January 2026 03:12:21 +0000 (0:00:03.508) 0:00:34.567 ******** 2026-01-05 03:12:28.924521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:12:28.924531 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:12:28.924558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:12:28.924575 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:12:28.924586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:12:28.924595 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:12:28.924605 | orchestrator | 2026-01-05 03:12:28.924614 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-05 03:12:28.924628 | orchestrator | Monday 05 January 2026 03:12:24 +0000 (0:00:03.781) 0:00:38.349 ******** 2026-01-05 03:12:28.924644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:12:33.724424 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:12:33.724542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:12:33.724567 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:12:33.724604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-05 03:12:33.724649 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:12:33.724666 | orchestrator | 2026-01-05 03:12:33.724680 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-05 03:12:33.724695 | orchestrator | Monday 05 January 2026 03:12:28 +0000 (0:00:03.927) 0:00:42.276 ******** 2026-01-05 03:12:33.724738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 03:12:33.724763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 03:12:33.724879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-05 03:15:09.083972 | orchestrator | 2026-01-05 03:15:09.084072 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-05 03:15:09.084083 | orchestrator | Monday 05 January 2026 03:12:33 +0000 (0:00:04.800) 0:00:47.077 ******** 2026-01-05 03:15:09.084089 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084097 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084103 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084110 | orchestrator | 2026-01-05 03:15:09.084116 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-05 03:15:09.084122 | orchestrator | Monday 05 January 2026 03:12:35 +0000 (0:00:02.017) 0:00:49.094 ******** 2026-01-05 03:15:09.084128 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084134 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084140 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084146 | orchestrator | 2026-01-05 03:15:09.084153 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-05 03:15:09.084158 | orchestrator | Monday 05 January 2026 03:12:37 +0000 (0:00:01.405) 0:00:50.499 
******** 2026-01-05 03:15:09.084164 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084170 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084176 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084182 | orchestrator | 2026-01-05 03:15:09.084188 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-05 03:15:09.084194 | orchestrator | Monday 05 January 2026 03:12:38 +0000 (0:00:01.680) 0:00:52.180 ******** 2026-01-05 03:15:09.084200 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084206 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084211 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084217 | orchestrator | 2026-01-05 03:15:09.084237 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-05 03:15:09.084244 | orchestrator | Monday 05 January 2026 03:12:40 +0000 (0:00:01.897) 0:00:54.078 ******** 2026-01-05 03:15:09.084250 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084256 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084282 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084289 | orchestrator | 2026-01-05 03:15:09.084293 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-05 03:15:09.084297 | orchestrator | Monday 05 January 2026 03:12:42 +0000 (0:00:01.544) 0:00:55.622 ******** 2026-01-05 03:15:09.084301 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:15:09.084305 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:15:09.084309 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:15:09.084313 | orchestrator | 2026-01-05 03:15:09.084317 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-05 03:15:09.084320 | orchestrator | Monday 05 January 2026 03:12:43 +0000 (0:00:01.502) 0:00:57.124 ******** 2026-01-05 03:15:09.084324 | orchestrator | ok: 
[testbed-node-0] 2026-01-05 03:15:09.084328 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084332 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084336 | orchestrator | 2026-01-05 03:15:09.084340 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-05 03:15:09.084344 | orchestrator | Monday 05 January 2026 03:12:47 +0000 (0:00:04.153) 0:01:01.278 ******** 2026-01-05 03:15:09.084350 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084356 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084362 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084368 | orchestrator | 2026-01-05 03:15:09.084373 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-05 03:15:09.084379 | orchestrator | Monday 05 January 2026 03:12:49 +0000 (0:00:01.543) 0:01:02.822 ******** 2026-01-05 03:15:09.084385 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084392 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084399 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084406 | orchestrator | 2026-01-05 03:15:09.084411 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-05 03:15:09.084415 | orchestrator | Monday 05 January 2026 03:12:50 +0000 (0:00:01.461) 0:01:04.284 ******** 2026-01-05 03:15:09.084419 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:15:09.084423 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:15:09.084427 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:15:09.084430 | orchestrator | 2026-01-05 03:15:09.084434 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 03:15:09.084438 | orchestrator | Monday 05 January 2026 03:12:53 +0000 (0:00:02.431) 0:01:06.716 ******** 2026-01-05 03:15:09.084442 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:15:09.084446 
| orchestrator | skipping: [testbed-node-1] 2026-01-05 03:15:09.084449 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:15:09.084453 | orchestrator | 2026-01-05 03:15:09.084457 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-05 03:15:09.084461 | orchestrator | Monday 05 January 2026 03:12:54 +0000 (0:00:01.385) 0:01:08.101 ******** 2026-01-05 03:15:09.084465 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:15:09.084468 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:15:09.084484 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:15:09.084488 | orchestrator | 2026-01-05 03:15:09.084491 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-05 03:15:09.084501 | orchestrator | Monday 05 January 2026 03:12:56 +0000 (0:00:02.068) 0:01:10.170 ******** 2026-01-05 03:15:09.084505 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:15:09.084509 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:15:09.084514 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:15:09.084519 | orchestrator | 2026-01-05 03:15:09.084523 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-05 03:15:09.084527 | orchestrator | Monday 05 January 2026 03:12:58 +0000 (0:00:01.512) 0:01:11.683 ******** 2026-01-05 03:15:09.084532 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:15:09.084537 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:15:09.084541 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:15:09.084546 | orchestrator | 2026-01-05 03:15:09.084555 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-05 03:15:09.084560 | orchestrator | 2026-01-05 03:15:09.084564 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 03:15:09.084569 | orchestrator | Monday 
05 January 2026 03:13:00 +0000 (0:00:01.888) 0:01:13.571 ******** 2026-01-05 03:15:09.084573 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:15:09.084578 | orchestrator | 2026-01-05 03:15:09.084595 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 03:15:09.084599 | orchestrator | Monday 05 January 2026 03:13:21 +0000 (0:00:21.055) 0:01:34.627 ******** 2026-01-05 03:15:09.084620 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084625 | orchestrator | 2026-01-05 03:15:09.084629 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 03:15:09.084634 | orchestrator | Monday 05 January 2026 03:13:26 +0000 (0:00:04.845) 0:01:39.473 ******** 2026-01-05 03:15:09.084638 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084643 | orchestrator | 2026-01-05 03:15:09.084648 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-05 03:15:09.084652 | orchestrator | 2026-01-05 03:15:09.084657 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 03:15:09.084661 | orchestrator | Monday 05 January 2026 03:13:30 +0000 (0:00:04.044) 0:01:43.518 ******** 2026-01-05 03:15:09.084666 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:15:09.084671 | orchestrator | 2026-01-05 03:15:09.084675 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 03:15:09.084680 | orchestrator | Monday 05 January 2026 03:13:55 +0000 (0:00:25.749) 0:02:09.267 ******** 2026-01-05 03:15:09.084685 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 
2026-01-05 03:15:09.084691 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084695 | orchestrator | 2026-01-05 03:15:09.084700 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 03:15:09.084709 | orchestrator | Monday 05 January 2026 03:14:04 +0000 (0:00:08.168) 0:02:17.436 ******** 2026-01-05 03:15:09.084715 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:15:09.084721 | orchestrator | 2026-01-05 03:15:09.084731 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-05 03:15:09.084738 | orchestrator | 2026-01-05 03:15:09.084743 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-05 03:15:09.084749 | orchestrator | Monday 05 January 2026 03:14:07 +0000 (0:00:03.757) 0:02:21.193 ******** 2026-01-05 03:15:09.084756 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:15:09.084763 | orchestrator | 2026-01-05 03:15:09.084769 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-05 03:15:09.084776 | orchestrator | Monday 05 January 2026 03:14:33 +0000 (0:00:25.803) 0:02:46.997 ******** 2026-01-05 03:15:09.084781 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 
2026-01-05 03:15:09.084785 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084789 | orchestrator | 2026-01-05 03:15:09.084793 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-05 03:15:09.084796 | orchestrator | Monday 05 January 2026 03:14:41 +0000 (0:00:08.108) 0:02:55.105 ******** 2026-01-05 03:15:09.084800 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-05 03:15:09.084804 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-05 03:15:09.084808 | orchestrator | mariadb_bootstrap_restart 2026-01-05 03:15:09.084812 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:15:09.084815 | orchestrator | 2026-01-05 03:15:09.084819 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-05 03:15:09.084823 | orchestrator | skipping: no hosts matched 2026-01-05 03:15:09.084827 | orchestrator | 2026-01-05 03:15:09.084831 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-05 03:15:09.084844 | orchestrator | skipping: no hosts matched 2026-01-05 03:15:09.084848 | orchestrator | 2026-01-05 03:15:09.084852 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-05 03:15:09.084856 | orchestrator | 2026-01-05 03:15:09.084860 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-05 03:15:09.084863 | orchestrator | Monday 05 January 2026 03:14:46 +0000 (0:00:05.058) 0:03:00.163 ******** 2026-01-05 03:15:09.084867 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:15:09.084871 | orchestrator | 2026-01-05 03:15:09.084875 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-05 03:15:09.084879 | orchestrator | Monday 05 January 2026 
03:14:48 +0000 (0:00:01.887) 0:03:02.051 ******** 2026-01-05 03:15:09.084882 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:15:09.084886 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:15:09.084890 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084894 | orchestrator | 2026-01-05 03:15:09.084898 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-05 03:15:09.084901 | orchestrator | Monday 05 January 2026 03:14:52 +0000 (0:00:03.797) 0:03:05.848 ******** 2026-01-05 03:15:09.084905 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:15:09.084909 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:15:09.084913 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:15:09.084917 | orchestrator | 2026-01-05 03:15:09.084920 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-05 03:15:09.084924 | orchestrator | Monday 05 January 2026 03:14:56 +0000 (0:00:03.624) 0:03:09.473 ******** 2026-01-05 03:15:09.084928 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:15:09.084932 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:15:09.084936 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:15:09.084940 | orchestrator | 2026-01-05 03:15:09.084943 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-05 03:15:09.084947 | orchestrator | Monday 05 January 2026 03:14:59 +0000 (0:00:03.766) 0:03:13.239 ******** 2026-01-05 03:15:09.084951 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:15:09.084955 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:15:09.084959 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:15:09.084962 | orchestrator | 2026-01-05 03:15:09.084966 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-05 03:15:09.084970 | orchestrator | Monday 05 January 2026 03:15:03 +0000 
(0:00:04.023) 0:03:17.263 ********
2026-01-05 03:15:09.084974 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:15:09.084978 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:15:09.084982 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:15:09.084985 | orchestrator |
2026-01-05 03:15:09.084989 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-01-05 03:15:09.084997 | orchestrator | Monday 05 January 2026 03:15:09 +0000 (0:00:05.161) 0:03:22.425 ********
2026-01-05 03:15:30.828615 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 03:15:30.828701 | orchestrator |
2026-01-05 03:15:30.828708 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ******************************
2026-01-05 03:15:30.828713 | orchestrator | Monday 05 January 2026 03:15:10 +0000 (0:00:01.746) 0:03:24.171 ********
2026-01-05 03:15:30.828718 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:15:30.828724 | orchestrator | changed: [testbed-node-1]
2026-01-05 03:15:30.828728 | orchestrator | changed: [testbed-node-2]
2026-01-05 03:15:30.828732 | orchestrator |
2026-01-05 03:15:30.828737 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 03:15:30.828742 | orchestrator | testbed-node-0 : ok=32  changed=6  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-05 03:15:30.828749 | orchestrator | testbed-node-1 : ok=24  changed=4  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-05 03:15:30.828772 | orchestrator | testbed-node-2 : ok=24  changed=4  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-05 03:15:30.828776 | orchestrator |
2026-01-05 03:15:30.828780 | orchestrator |
2026-01-05 03:15:30.828785 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 03:15:30.828798 | orchestrator | Monday 05 January 2026 03:15:30 +0000
(0:00:19.540) 0:03:43.712 ********
2026-01-05 03:15:30.828803 | orchestrator | ===============================================================================
2026-01-05 03:15:30.828807 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 72.61s
2026-01-05 03:15:30.828811 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 21.12s
2026-01-05 03:15:30.828815 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 19.54s
2026-01-05 03:15:30.828819 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 12.86s
2026-01-05 03:15:30.828823 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 5.16s
2026-01-05 03:15:30.828827 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.15s
2026-01-05 03:15:30.828831 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.80s
2026-01-05 03:15:30.828836 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.63s
2026-01-05 03:15:30.828840 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 4.15s
2026-01-05 03:15:30.828844 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 4.02s
2026-01-05 03:15:30.828848 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.99s
2026-01-05 03:15:30.828852 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.93s
2026-01-05 03:15:30.828856 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.80s
2026-01-05 03:15:30.828860 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.78s
2026-01-05 03:15:30.828865 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 3.77s
2026-01-05 03:15:30.828869 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.63s
2026-01-05 03:15:30.828873 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.51s
2026-01-05 03:15:30.828878 | orchestrator | mariadb : Fail when MariaDB services are not synced across the whole cluster --- 2.43s
2026-01-05 03:15:30.828882 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.41s
2026-01-05 03:15:30.828886 | orchestrator | mariadb : Copying over my.cnf for mariabackup --------------------------- 2.31s
2026-01-05 03:15:31.189789 | orchestrator | + osism apply -a upgrade rabbitmq
2026-01-05 03:15:33.394706 | orchestrator | 2026-01-05 03:15:33 | INFO  | Task 4e58bf85-b408-4a3d-a5b8-96add8d88226 (rabbitmq) was prepared for execution.
2026-01-05 03:15:33.394796 | orchestrator | 2026-01-05 03:15:33 | INFO  | It takes a moment until task 4e58bf85-b408-4a3d-a5b8-96add8d88226 (rabbitmq) has been started and output is visible here.
2026-01-05 03:16:13.578259 | orchestrator | 2026-01-05 03:16:13.578369 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 03:16:13.578376 | orchestrator | 2026-01-05 03:16:13.578381 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 03:16:13.578387 | orchestrator | Monday 05 January 2026 03:15:39 +0000 (0:00:01.643) 0:00:01.643 ******** 2026-01-05 03:16:13.578391 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:16:13.578397 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:16:13.578401 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:16:13.578405 | orchestrator | 2026-01-05 03:16:13.578409 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 03:16:13.578413 | orchestrator | Monday 05 January 2026 03:15:41 +0000 (0:00:01.784) 0:00:03.428 ******** 2026-01-05 03:16:13.578445 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-05 03:16:13.578465 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-05 03:16:13.578470 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-05 03:16:13.578474 | orchestrator | 2026-01-05 03:16:13.578478 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-05 03:16:13.578482 | orchestrator | 2026-01-05 03:16:13.578487 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 03:16:13.578491 | orchestrator | Monday 05 January 2026 03:15:43 +0000 (0:00:02.081) 0:00:05.510 ******** 2026-01-05 03:16:13.578496 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:16:13.578501 | orchestrator | 2026-01-05 03:16:13.578505 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 
2026-01-05 03:16:13.578509 | orchestrator | Monday 05 January 2026 03:15:45 +0000 (0:00:02.738) 0:00:08.248 ******** 2026-01-05 03:16:13.578513 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:16:13.578517 | orchestrator | 2026-01-05 03:16:13.578520 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-05 03:16:13.578525 | orchestrator | Monday 05 January 2026 03:15:48 +0000 (0:00:02.117) 0:00:10.366 ******** 2026-01-05 03:16:13.578529 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:16:13.578578 | orchestrator | 2026-01-05 03:16:13.578583 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-05 03:16:13.578587 | orchestrator | Monday 05 January 2026 03:15:51 +0000 (0:00:03.549) 0:00:13.916 ******** 2026-01-05 03:16:13.578591 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:16:13.578596 | orchestrator | 2026-01-05 03:16:13.578600 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-05 03:16:13.578604 | orchestrator | Monday 05 January 2026 03:15:56 +0000 (0:00:04.846) 0:00:18.762 ******** 2026-01-05 03:16:13.578608 | orchestrator | ok: [testbed-node-0] => { 2026-01-05 03:16:13.578612 | orchestrator |  "changed": false, 2026-01-05 03:16:13.578617 | orchestrator |  "msg": "All assertions passed" 2026-01-05 03:16:13.578621 | orchestrator | } 2026-01-05 03:16:13.578625 | orchestrator | 2026-01-05 03:16:13.578640 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-05 03:16:13.578644 | orchestrator | Monday 05 January 2026 03:15:57 +0000 (0:00:01.420) 0:00:20.182 ******** 2026-01-05 03:16:13.578648 | orchestrator | ok: [testbed-node-0] => { 2026-01-05 03:16:13.578652 | orchestrator |  "changed": false, 2026-01-05 03:16:13.578656 | orchestrator |  "msg": "All assertions passed" 2026-01-05 03:16:13.578660 | orchestrator | } 2026-01-05 03:16:13.578664 | 
orchestrator | 2026-01-05 03:16:13.578668 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 03:16:13.578672 | orchestrator | Monday 05 January 2026 03:15:59 +0000 (0:00:01.829) 0:00:22.012 ******** 2026-01-05 03:16:13.578676 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:16:13.578680 | orchestrator | 2026-01-05 03:16:13.578684 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-05 03:16:13.578688 | orchestrator | Monday 05 January 2026 03:16:01 +0000 (0:00:01.943) 0:00:23.955 ******** 2026-01-05 03:16:13.578692 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:16:13.578696 | orchestrator | 2026-01-05 03:16:13.578700 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-05 03:16:13.578704 | orchestrator | Monday 05 January 2026 03:16:03 +0000 (0:00:02.001) 0:00:25.956 ******** 2026-01-05 03:16:13.578708 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:16:13.578712 | orchestrator | 2026-01-05 03:16:13.578716 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-05 03:16:13.578720 | orchestrator | Monday 05 January 2026 03:16:07 +0000 (0:00:03.448) 0:00:29.405 ******** 2026-01-05 03:16:13.578724 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:16:13.578728 | orchestrator | 2026-01-05 03:16:13.578737 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-05 03:16:13.578741 | orchestrator | Monday 05 January 2026 03:16:09 +0000 (0:00:01.971) 0:00:31.376 ******** 2026-01-05 03:16:13.578761 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:16:13.578767 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:16:13.578776 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:16:13.578780 | orchestrator | 2026-01-05 03:16:13.578784 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-05 03:16:13.578788 | orchestrator | Monday 05 January 2026 03:16:11 +0000 (0:00:02.078) 0:00:33.455 ******** 2026-01-05 03:16:13.578793 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:16:13.578805 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:16:42.700622 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:16:42.700796 | orchestrator | 2026-01-05 03:16:42.700810 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-05 03:16:42.700819 | orchestrator | Monday 05 January 2026 03:16:13 +0000 (0:00:02.369) 0:00:35.824 ******** 2026-01-05 03:16:42.700826 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 03:16:42.700834 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 03:16:42.700864 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-05 03:16:42.700877 | orchestrator | 2026-01-05 03:16:42.700889 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-05 03:16:42.700901 | orchestrator | Monday 05 January 2026 03:16:16 +0000 (0:00:02.580) 0:00:38.404 ******** 2026-01-05 03:16:42.700912 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 03:16:42.700923 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 03:16:42.700935 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-05 03:16:42.700946 | orchestrator | 2026-01-05 
03:16:42.701031 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-05 03:16:42.701048 | orchestrator | Monday 05 January 2026 03:16:19 +0000 (0:00:03.113) 0:00:41.518 ******** 2026-01-05 03:16:42.701060 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 03:16:42.701073 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 03:16:42.701084 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-05 03:16:42.701096 | orchestrator | 2026-01-05 03:16:42.701108 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-05 03:16:42.701121 | orchestrator | Monday 05 January 2026 03:16:21 +0000 (0:00:02.536) 0:00:44.054 ******** 2026-01-05 03:16:42.701143 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 03:16:42.701156 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 03:16:42.701168 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-05 03:16:42.701180 | orchestrator | 2026-01-05 03:16:42.701193 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-05 03:16:42.701205 | orchestrator | Monday 05 January 2026 03:16:24 +0000 (0:00:02.762) 0:00:46.817 ******** 2026-01-05 03:16:42.701217 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-05 03:16:42.701229 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-05 03:16:42.701241 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-05 03:16:42.701253 | orchestrator | 2026-01-05 
03:16:42.701266 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-05 03:16:42.701278 | orchestrator | Monday 05 January 2026 03:16:27 +0000 (0:00:02.566) 0:00:49.384 ******** 2026-01-05 03:16:42.701290 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-05 03:16:42.701301 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-05 03:16:42.701314 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-05 03:16:42.701326 | orchestrator | 2026-01-05 03:16:42.701339 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-05 03:16:42.701350 | orchestrator | Monday 05 January 2026 03:16:29 +0000 (0:00:02.797) 0:00:52.181 ******** 2026-01-05 03:16:42.701363 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:16:42.701377 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:16:42.701388 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:16:42.701400 | orchestrator | 2026-01-05 03:16:42.701413 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-05 03:16:42.701447 | orchestrator | Monday 05 January 2026 03:16:31 +0000 (0:00:02.061) 0:00:54.243 ******** 2026-01-05 03:16:42.701459 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:16:42.701474 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:16:42.701486 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:16:42.701497 | orchestrator | 2026-01-05 03:16:42.701535 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-05 03:16:42.701546 | orchestrator | Monday 05 January 2026 03:16:36 +0000 (0:00:04.398) 0:00:58.642 ******** 2026-01-05 03:16:42.701560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:16:42.701590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:16:42.701606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-05 03:16:42.701618 | orchestrator | 2026-01-05 03:16:42.701629 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-05 03:16:42.701640 | orchestrator | Monday 05 January 2026 03:16:39 +0000 (0:00:02.721) 0:01:01.363 ******** 2026-01-05 03:16:42.701651 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:16:42.701662 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:16:42.701674 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:16:42.701685 | orchestrator | 2026-01-05 03:16:42.701697 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-05 03:16:42.701708 | orchestrator | 2026-01-05 03:16:42.701719 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 
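The "Check rabbitmq containers" task above prints the full container definition it operates on (name, image, environment, volumes). As an illustrative sketch only — not OSISM or kolla-ansible code — the logged dict maps onto `docker run`-style arguments roughly like this; the `to_docker_args` helper and the trimmed `rabbitmq` dict below are assumptions for illustration, with field names copied from the log:

```python
# Trimmed copy of the container definition printed by the
# "Check rabbitmq containers" task in the log above.
rabbitmq = {
    "container_name": "rabbitmq",
    "image": "registry.osism.tech/kolla/rabbitmq:2024.2",
    "environment": {
        "KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS",
        "RABBITMQ_LOG_DIR": "/var/log/kolla/rabbitmq",
    },
    "volumes": [
        "/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro",
        "rabbitmq:/var/lib/rabbitmq/",
    ],
}


def to_docker_args(defn):
    """Hypothetical helper: flatten a kolla-style container definition
    into docker-run-like CLI arguments (image goes last)."""
    args = ["--name", defn["container_name"]]
    for key, value in defn["environment"].items():
        args += ["--env", f"{key}={value}"]
    for volume in defn["volumes"]:
        args += ["--volume", volume]
    args.append(defn["image"])
    return args
```

The real deployment is driven by the kolla_container Ansible module rather than a CLI translation; the sketch only shows how the logged fields relate to container settings.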
2026-01-05 03:16:42.701730 | orchestrator | Monday 05 January 2026 03:16:40 +0000 (0:00:01.799) 0:01:03.163 ******** 2026-01-05 03:16:42.701741 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:16:42.701752 | orchestrator | 2026-01-05 03:16:42.701770 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-05 03:18:34.254576 | orchestrator | Monday 05 January 2026 03:16:42 +0000 (0:00:01.786) 0:01:04.950 ******** 2026-01-05 03:18:34.254661 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:18:34.254669 | orchestrator | 2026-01-05 03:18:34.254674 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-05 03:18:34.254694 | orchestrator | Monday 05 January 2026 03:16:50 +0000 (0:00:08.063) 0:01:13.013 ******** 2026-01-05 03:18:34.254699 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:18:34.254703 | orchestrator | 2026-01-05 03:18:34.254707 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-05 03:18:34.254711 | orchestrator | Monday 05 January 2026 03:17:00 +0000 (0:00:09.585) 0:01:22.598 ******** 2026-01-05 03:18:34.254715 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:18:34.254719 | orchestrator | 2026-01-05 03:18:34.254722 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-05 03:18:34.254726 | orchestrator | 2026-01-05 03:18:34.254730 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-05 03:18:34.254734 | orchestrator | Monday 05 January 2026 03:17:14 +0000 (0:00:14.551) 0:01:37.150 ******** 2026-01-05 03:18:34.254738 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:18:34.254743 | orchestrator | 2026-01-05 03:18:34.254747 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-05 03:18:34.254751 | orchestrator | Monday 05 
January 2026 03:17:16 +0000 (0:00:01.371) 0:01:38.522 ******** 2026-01-05 03:18:34.254754 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:18:34.254758 | orchestrator | 2026-01-05 03:18:34.254762 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-05 03:18:34.254766 | orchestrator | Monday 05 January 2026 03:17:23 +0000 (0:00:07.322) 0:01:45.844 ******** 2026-01-05 03:18:34.254770 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:18:34.254773 | orchestrator | 2026-01-05 03:18:34.254777 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-05 03:18:34.254781 | orchestrator | Monday 05 January 2026 03:17:37 +0000 (0:00:13.626) 0:01:59.471 ******** 2026-01-05 03:18:34.254785 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:18:34.254788 | orchestrator | 2026-01-05 03:18:34.254804 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-05 03:18:34.254808 | orchestrator | 2026-01-05 03:18:34.254812 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-05 03:18:34.254816 | orchestrator | Monday 05 January 2026 03:17:51 +0000 (0:00:14.715) 0:02:14.187 ******** 2026-01-05 03:18:34.254820 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:18:34.254824 | orchestrator | 2026-01-05 03:18:34.254827 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-05 03:18:34.254831 | orchestrator | Monday 05 January 2026 03:17:53 +0000 (0:00:01.428) 0:02:15.615 ******** 2026-01-05 03:18:34.254835 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:18:34.254839 | orchestrator | 2026-01-05 03:18:34.254842 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-05 03:18:34.254846 | orchestrator | Monday 05 January 2026 03:18:00 +0000 (0:00:07.538) 0:02:23.154 
******** 2026-01-05 03:18:34.254850 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:18:34.254853 | orchestrator | 2026-01-05 03:18:34.254857 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-05 03:18:34.254861 | orchestrator | Monday 05 January 2026 03:18:14 +0000 (0:00:13.478) 0:02:36.633 ******** 2026-01-05 03:18:34.254865 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:18:34.254868 | orchestrator | 2026-01-05 03:18:34.254872 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-05 03:18:34.254876 | orchestrator | 2026-01-05 03:18:34.254880 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-05 03:18:34.254883 | orchestrator | Monday 05 January 2026 03:18:29 +0000 (0:00:15.561) 0:02:52.194 ******** 2026-01-05 03:18:34.254887 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:18:34.254891 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-05 03:18:34.254895 | orchestrator | enable_outward_rabbitmq_True 2026-01-05 03:18:34.254899 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-05 03:18:34.254902 | orchestrator | outward_rabbitmq_restart 2026-01-05 03:18:34.254911 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:18:34.254915 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:18:34.254918 | orchestrator | 2026-01-05 03:18:34.254922 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-05 03:18:34.254926 | orchestrator | skipping: no hosts matched 2026-01-05 03:18:34.254930 | orchestrator | 2026-01-05 03:18:34.254934 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-05 03:18:34.254938 | orchestrator | skipping: no hosts matched 2026-01-05 03:18:34.254941 | orchestrator | 2026-01-05 
03:18:34.254945 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-05 03:18:34.254949 | orchestrator | skipping: no hosts matched 2026-01-05 03:18:34.254953 | orchestrator | 2026-01-05 03:18:34.254956 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 03:18:34.254961 | orchestrator | testbed-node-0 : ok=26  changed=6  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 03:18:34.254967 | orchestrator | testbed-node-1 : ok=19  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 03:18:34.254971 | orchestrator | testbed-node-2 : ok=19  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-05 03:18:34.254974 | orchestrator | 2026-01-05 03:18:34.254978 | orchestrator | 2026-01-05 03:18:34.254982 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 03:18:34.254986 | orchestrator | Monday 05 January 2026 03:18:33 +0000 (0:00:03.899) 0:02:56.094 ******** 2026-01-05 03:18:34.254990 | orchestrator | =============================================================================== 2026-01-05 03:18:34.255004 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 44.83s 2026-01-05 03:18:34.255008 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 36.69s 2026-01-05 03:18:34.255012 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 22.92s 2026-01-05 03:18:34.255015 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 4.85s 2026-01-05 03:18:34.255019 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 4.59s 2026-01-05 03:18:34.255023 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.40s 2026-01-05 03:18:34.255027 | orchestrator | 
Include rabbitmq post-deploy.yml ---------------------------------------- 3.90s 2026-01-05 03:18:34.255030 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.55s 2026-01-05 03:18:34.255034 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.45s 2026-01-05 03:18:34.255038 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.11s 2026-01-05 03:18:34.255042 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.80s 2026-01-05 03:18:34.255045 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.76s 2026-01-05 03:18:34.255049 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.74s 2026-01-05 03:18:34.255053 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.72s 2026-01-05 03:18:34.255057 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.58s 2026-01-05 03:18:34.255063 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.57s 2026-01-05 03:18:34.255069 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.54s 2026-01-05 03:18:34.255074 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.37s 2026-01-05 03:18:34.255081 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.12s 2026-01-05 03:18:34.255091 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.08s 2026-01-05 03:18:34.589720 | orchestrator | + osism apply -a upgrade openvswitch 2026-01-05 03:18:36.777140 | orchestrator | 2026-01-05 03:18:36 | INFO  | Task fb5e1c12-2c46-4812-a4c5-64bfb361ed11 (openvswitch) was prepared for execution. 
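The three "Restart rabbitmq services" plays above follow a strictly serial pattern: each node is inspected, drained into maintenance mode, restarted, and confirmed healthy before the next node is touched, which is what keeps the cluster quorate during the upgrade. A minimal sketch of that ordering (the `rolling_restart_plan` helper is hypothetical, with step names taken from the task headers in the log):

```python
def rolling_restart_plan(nodes):
    """Yield (node, step) pairs in the order the log shows: every step
    for one node completes before the next node begins."""
    steps = [
        "Get info on RabbitMQ container",
        "Put RabbitMQ node into maintenance mode",
        "Restart rabbitmq container",
        "Waiting for rabbitmq to start",
    ]
    for node in nodes:
        for step in steps:
            yield (node, step)


plan = list(
    rolling_restart_plan(
        ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
    )
)
```

Under this serial schedule, three nodes at roughly 30 seconds each account for the ~90 seconds the TASKS RECAP attributes to the restart/wait/maintenance-mode tasks combined.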
2026-01-05 03:18:36.777237 | orchestrator | 2026-01-05 03:18:36 | INFO  | It takes a moment until task fb5e1c12-2c46-4812-a4c5-64bfb361ed11 (openvswitch) has been started and output is visible here. 2026-01-05 03:19:05.697214 | orchestrator | 2026-01-05 03:19:05.697329 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 03:19:05.697345 | orchestrator | 2026-01-05 03:19:05.697356 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 03:19:05.697368 | orchestrator | Monday 05 January 2026 03:18:42 +0000 (0:00:01.744) 0:00:01.744 ******** 2026-01-05 03:19:05.697437 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:19:05.697451 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:19:05.697463 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:19:05.697473 | orchestrator | ok: [testbed-node-3] 2026-01-05 03:19:05.697485 | orchestrator | ok: [testbed-node-4] 2026-01-05 03:19:05.697495 | orchestrator | ok: [testbed-node-5] 2026-01-05 03:19:05.697506 | orchestrator | 2026-01-05 03:19:05.697518 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 03:19:05.697529 | orchestrator | Monday 05 January 2026 03:18:45 +0000 (0:00:02.860) 0:00:04.605 ******** 2026-01-05 03:19:05.697542 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 03:19:05.697554 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 03:19:05.697565 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 03:19:05.697576 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 03:19:05.697586 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 03:19:05.697597 | orchestrator | ok: [testbed-node-5] 
=> (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-05 03:19:05.697608 | orchestrator | 2026-01-05 03:19:05.697619 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-05 03:19:05.697630 | orchestrator | 2026-01-05 03:19:05.697641 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-05 03:19:05.697652 | orchestrator | Monday 05 January 2026 03:18:48 +0000 (0:00:03.206) 0:00:07.811 ******** 2026-01-05 03:19:05.697664 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-05 03:19:05.697677 | orchestrator | 2026-01-05 03:19:05.697690 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-05 03:19:05.697704 | orchestrator | Monday 05 January 2026 03:18:52 +0000 (0:00:03.043) 0:00:10.855 ******** 2026-01-05 03:19:05.697717 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-01-05 03:19:05.697731 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-01-05 03:19:05.697744 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-01-05 03:19:05.697757 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-01-05 03:19:05.697769 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-01-05 03:19:05.697782 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-01-05 03:19:05.697794 | orchestrator | 2026-01-05 03:19:05.697807 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-05 03:19:05.697821 | orchestrator | Monday 05 January 2026 03:18:54 +0000 (0:00:02.358) 0:00:13.214 ******** 2026-01-05 03:19:05.697845 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-01-05 03:19:05.697858 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-01-05 
03:19:05.697871 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-01-05 03:19:05.697884 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-01-05 03:19:05.697896 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-01-05 03:19:05.697935 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-01-05 03:19:05.697949 | orchestrator | 2026-01-05 03:19:05.697962 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-05 03:19:05.697977 | orchestrator | Monday 05 January 2026 03:18:57 +0000 (0:00:03.058) 0:00:16.272 ******** 2026-01-05 03:19:05.697990 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-05 03:19:05.698003 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:19:05.698072 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-05 03:19:05.698085 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:19:05.698096 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-05 03:19:05.698106 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:19:05.698117 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-05 03:19:05.698128 | orchestrator | skipping: [testbed-node-3] 2026-01-05 03:19:05.698139 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-05 03:19:05.698150 | orchestrator | skipping: [testbed-node-4] 2026-01-05 03:19:05.698160 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-05 03:19:05.698171 | orchestrator | skipping: [testbed-node-5] 2026-01-05 03:19:05.698182 | orchestrator | 2026-01-05 03:19:05.698193 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-05 03:19:05.698204 | orchestrator | Monday 05 January 2026 03:19:00 +0000 (0:00:02.950) 0:00:19.223 ******** 2026-01-05 03:19:05.698215 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:19:05.698226 | 
orchestrator | skipping: [testbed-node-1] 2026-01-05 03:19:05.698237 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:19:05.698248 | orchestrator | skipping: [testbed-node-3] 2026-01-05 03:19:05.698258 | orchestrator | skipping: [testbed-node-4] 2026-01-05 03:19:05.698269 | orchestrator | skipping: [testbed-node-5] 2026-01-05 03:19:05.698280 | orchestrator | 2026-01-05 03:19:05.698306 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-05 03:19:05.698327 | orchestrator | Monday 05 January 2026 03:19:02 +0000 (0:00:02.434) 0:00:21.658 ******** 2026-01-05 03:19:05.698361 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:05.698404 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:05.698424 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:05.698456 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:05.698470 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:05.698487 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:05.698508 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148177 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148315 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148407 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148431 | orchestrator | ok: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148443 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148456 | orchestrator | 2026-01-05 03:19:08.148470 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-05 03:19:08.148483 | orchestrator | Monday 05 January 2026 03:19:05 +0000 (0:00:02.868) 0:00:24.526 ******** 2026-01-05 03:19:08.148524 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148685 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148737 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148752 | orchestrator | ok: [testbed-node-3] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148767 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148786 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:08.148813 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918345 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918495 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918504 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918508 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918524 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918528 | orchestrator | 2026-01-05 03:19:13.918534 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-05 03:19:13.918539 | orchestrator | Monday 05 January 2026 03:19:09 +0000 (0:00:03.571) 0:00:28.097 ******** 2026-01-05 03:19:13.918545 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:19:13.918552 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:19:13.918560 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:19:13.918563 | orchestrator | skipping: [testbed-node-3] 2026-01-05 03:19:13.918567 | orchestrator | skipping: [testbed-node-4] 2026-01-05 03:19:13.918571 | orchestrator | skipping: [testbed-node-5] 2026-01-05 03:19:13.918579 | orchestrator | 2026-01-05 03:19:13.918583 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-05 03:19:13.918587 | orchestrator | Monday 05 January 2026 03:19:11 +0000 (0:00:02.506) 0:00:30.604 ******** 2026-01-05 03:19:13.918603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-05 03:19:13.918633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-01-05 03:19:53.470917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:53.471018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:53.471041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:53.471071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:53.471078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:53.471117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-05 03:19:53.471125 | orchestrator | 2026-01-05 03:19:53.471133 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 03:19:53.471141 | orchestrator | Monday 05 January 2026 03:19:15 +0000 (0:00:03.835) 0:00:34.440 ******** 2026-01-05 03:19:53.471147 | orchestrator | 2026-01-05 03:19:53.471154 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 03:19:53.471165 | orchestrator | Monday 05 January 2026 03:19:16 +0000 (0:00:00.531) 0:00:34.971 ******** 2026-01-05 03:19:53.471175 | orchestrator | 2026-01-05 03:19:53.471188 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 03:19:53.471205 | orchestrator | Monday 05 January 2026 03:19:16 +0000 (0:00:00.701) 0:00:35.672 ******** 2026-01-05 03:19:53.471215 | orchestrator | 2026-01-05 03:19:53.471226 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 03:19:53.471237 | orchestrator | Monday 05 January 2026 03:19:17 +0000 (0:00:00.562) 0:00:36.234 ******** 2026-01-05 03:19:53.471246 | orchestrator | 2026-01-05 03:19:53.471257 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-05 03:19:53.471267 | orchestrator | Monday 05 January 2026 03:19:17 +0000 (0:00:00.502) 0:00:36.737 ******** 2026-01-05 03:19:53.471277 | orchestrator | 2026-01-05 03:19:53.471288 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-01-05 03:19:53.471298 | orchestrator | Monday 05 January 2026 03:19:18 +0000 (0:00:00.509) 0:00:37.246 ******** 2026-01-05 03:19:53.471309 | orchestrator | 2026-01-05 03:19:53.471318 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-05 03:19:53.471324 | orchestrator | Monday 05 January 2026 03:19:19 +0000 (0:00:00.885) 0:00:38.132 ******** 2026-01-05 03:19:53.471330 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:19:53.471396 | orchestrator | changed: [testbed-node-3] 2026-01-05 03:19:53.471409 | orchestrator | changed: [testbed-node-4] 2026-01-05 03:19:53.471420 | orchestrator | changed: [testbed-node-5] 2026-01-05 03:19:53.471426 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:19:53.471433 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:19:53.471439 | orchestrator | 2026-01-05 03:19:53.471446 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-05 03:19:53.471453 | orchestrator | Monday 05 January 2026 03:19:31 +0000 (0:00:12.222) 0:00:50.355 ******** 2026-01-05 03:19:53.471459 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:19:53.471467 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:19:53.471475 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:19:53.471482 | orchestrator | ok: [testbed-node-3] 2026-01-05 03:19:53.471489 | orchestrator | ok: [testbed-node-4] 2026-01-05 03:19:53.471497 | orchestrator | ok: [testbed-node-5] 2026-01-05 03:19:53.471504 | orchestrator | 2026-01-05 03:19:53.471511 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-05 03:19:53.471518 | orchestrator | Monday 05 January 2026 03:19:33 +0000 (0:00:02.207) 0:00:52.562 ******** 2026-01-05 03:19:53.471526 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:19:53.471533 | orchestrator | changed: [testbed-node-3] 2026-01-05 03:19:53.471540 | orchestrator | changed: [testbed-node-4] 
2026-01-05 03:19:53.471548 | orchestrator | changed: [testbed-node-5] 2026-01-05 03:19:53.471565 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:19:53.471572 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:19:53.471578 | orchestrator | 2026-01-05 03:19:53.471584 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-05 03:19:53.471590 | orchestrator | Monday 05 January 2026 03:19:45 +0000 (0:00:11.290) 0:01:03.853 ******** 2026-01-05 03:19:53.471597 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-05 03:19:53.471605 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-05 03:19:53.471611 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-05 03:19:53.471623 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-05 03:19:53.471629 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-05 03:19:53.471635 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-05 03:19:53.471642 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-05 03:19:53.471648 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-05 03:19:53.471654 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-05 03:19:53.471660 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-05 03:19:53.471667 | 
orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-05 03:19:53.471673 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-05 03:19:53.471680 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 03:19:53.471687 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 03:19:53.471693 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 03:19:53.471706 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 03:20:02.020409 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 03:20:02.020518 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-05 03:20:02.020529 | orchestrator | 2026-01-05 03:20:02.020540 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-05 03:20:02.020546 | orchestrator | Monday 05 January 2026 03:19:53 +0000 (0:00:08.440) 0:01:12.293 ******** 2026-01-05 03:20:02.020552 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-05 03:20:02.020556 | orchestrator | skipping: [testbed-node-3] 2026-01-05 03:20:02.020562 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-05 03:20:02.020566 | orchestrator | skipping: [testbed-node-4] 2026-01-05 03:20:02.020571 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-05 03:20:02.020575 | orchestrator | skipping: [testbed-node-5] 2026-01-05 03:20:02.020579 | orchestrator | 
ok: [testbed-node-0] => (item=br-ex) 2026-01-05 03:20:02.020584 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-01-05 03:20:02.020587 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-01-05 03:20:02.020591 | orchestrator | 2026-01-05 03:20:02.020595 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-05 03:20:02.020620 | orchestrator | Monday 05 January 2026 03:19:56 +0000 (0:00:03.530) 0:01:15.824 ******** 2026-01-05 03:20:02.020624 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-05 03:20:02.020628 | orchestrator | skipping: [testbed-node-3] 2026-01-05 03:20:02.020631 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-05 03:20:02.020635 | orchestrator | skipping: [testbed-node-4] 2026-01-05 03:20:02.020639 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-05 03:20:02.020657 | orchestrator | skipping: [testbed-node-5] 2026-01-05 03:20:02.020661 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-05 03:20:02.020665 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-05 03:20:02.020669 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-05 03:20:02.020673 | orchestrator | 2026-01-05 03:20:02.020677 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 03:20:02.020683 | orchestrator | testbed-node-0 : ok=14  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 03:20:02.020689 | orchestrator | testbed-node-1 : ok=14  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 03:20:02.020693 | orchestrator | testbed-node-2 : ok=14  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-05 03:20:02.020698 | orchestrator | testbed-node-3 : ok=12  changed=3  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 
2026-01-05 03:20:02.020702 | orchestrator | testbed-node-4 : ok=12  changed=3  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 03:20:02.020705 | orchestrator | testbed-node-5 : ok=12  changed=3  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-05 03:20:02.020709 | orchestrator | 2026-01-05 03:20:02.020713 | orchestrator | 2026-01-05 03:20:02.020727 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 03:20:02.020732 | orchestrator | Monday 05 January 2026 03:20:01 +0000 (0:00:04.470) 0:01:20.294 ******** 2026-01-05 03:20:02.020735 | orchestrator | =============================================================================== 2026-01-05 03:20:02.020739 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.22s 2026-01-05 03:20:02.020743 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.29s 2026-01-05 03:20:02.020747 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.44s 2026-01-05 03:20:02.020751 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.47s 2026-01-05 03:20:02.020754 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.84s 2026-01-05 03:20:02.020758 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.69s 2026-01-05 03:20:02.020762 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.57s 2026-01-05 03:20:02.020766 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.53s 2026-01-05 03:20:02.020770 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.21s 2026-01-05 03:20:02.020774 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.06s 2026-01-05 03:20:02.020777 | 
orchestrator | openvswitch : include_tasks --------------------------------------------- 3.04s 2026-01-05 03:20:02.020781 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.95s 2026-01-05 03:20:02.020785 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.87s 2026-01-05 03:20:02.020789 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.86s 2026-01-05 03:20:02.020793 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.51s 2026-01-05 03:20:02.020801 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.43s 2026-01-05 03:20:02.020818 | orchestrator | module-load : Load modules ---------------------------------------------- 2.36s 2026-01-05 03:20:02.020824 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.21s 2026-01-05 03:20:02.431687 | orchestrator | + osism apply -a upgrade ovn 2026-01-05 03:20:04.556210 | orchestrator | 2026-01-05 03:20:04 | INFO  | Task d27ba1a1-2dba-4c98-9a4c-f8f5012aec26 (ovn) was prepared for execution. 2026-01-05 03:20:04.556307 | orchestrator | 2026-01-05 03:20:04 | INFO  | It takes a moment until task d27ba1a1-2dba-4c98-9a4c-f8f5012aec26 (ovn) has been started and output is visible here. 
2026-01-05 03:20:26.555804 | orchestrator | 2026-01-05 03:20:26.555951 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 03:20:26.555969 | orchestrator | 2026-01-05 03:20:26.555981 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 03:20:26.555992 | orchestrator | Monday 05 January 2026 03:20:10 +0000 (0:00:01.653) 0:00:01.653 ******** 2026-01-05 03:20:26.556003 | orchestrator | ok: [testbed-node-3] 2026-01-05 03:20:26.556016 | orchestrator | ok: [testbed-node-4] 2026-01-05 03:20:26.556027 | orchestrator | ok: [testbed-node-5] 2026-01-05 03:20:26.556039 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:20:26.556049 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:20:26.556060 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:20:26.556071 | orchestrator | 2026-01-05 03:20:26.556081 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 03:20:26.556092 | orchestrator | Monday 05 January 2026 03:20:13 +0000 (0:00:02.741) 0:00:04.394 ******** 2026-01-05 03:20:26.556103 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-05 03:20:26.556114 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-05 03:20:26.556124 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-05 03:20:26.556135 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-05 03:20:26.556145 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-05 03:20:26.556156 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-05 03:20:26.556167 | orchestrator | 2026-01-05 03:20:26.556178 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-05 03:20:26.556188 | orchestrator | 2026-01-05 03:20:26.556198 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2026-01-05 03:20:26.556208 | orchestrator | Monday 05 January 2026 03:20:15 +0000 (0:00:02.273) 0:00:06.668 ******** 2026-01-05 03:20:26.556220 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:20:26.556233 | orchestrator | 2026-01-05 03:20:26.556243 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-05 03:20:26.556254 | orchestrator | Monday 05 January 2026 03:20:18 +0000 (0:00:03.234) 0:00:09.903 ******** 2026-01-05 03:20:26.556268 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556305 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556371 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-05 03:20:26.556385 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556396 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556408 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556419 | orchestrator | 2026-01-05 03:20:26.556450 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-05 03:20:26.556462 | orchestrator | Monday 05 January 2026 03:20:21 +0000 (0:00:02.400) 0:00:12.303 ******** 2026-01-05 03:20:26.556473 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556484 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556496 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556506 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556524 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556546 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556557 | orchestrator | 2026-01-05 03:20:26.556567 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-05 03:20:26.556577 | orchestrator | Monday 05 January 2026 03:20:24 +0000 (0:00:02.699) 0:00:15.003 ******** 2026-01-05 03:20:26.556589 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556599 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556610 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:20:26.556630 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.610989 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611088 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611098 | orchestrator | 2026-01-05 03:21:01.611107 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-05 03:21:01.611115 | orchestrator | Monday 05 January 2026 03:20:26 +0000 (0:00:02.529) 0:00:17.532 ******** 2026-01-05 03:21:01.611122 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611158 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611166 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611172 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611179 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-05 03:21:01.611185 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611192 | orchestrator | 2026-01-05 03:21:01.611198 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-05 03:21:01.611205 | orchestrator | Monday 05 January 2026 03:20:29 +0000 (0:00:03.043) 0:00:20.576 ******** 2026-01-05 03:21:01.611225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611232 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:21:01.611273 | orchestrator | 2026-01-05 03:21:01.611280 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-05 03:21:01.611377 | orchestrator | Monday 05 January 2026 03:20:32 +0000 (0:00:03.066) 0:00:23.643 ******** 2026-01-05 03:21:01.611393 | orchestrator | ok: [testbed-node-5] 2026-01-05 03:21:01.611406 | orchestrator | ok: [testbed-node-3] 
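The "Check ovn-controller containers" and the following "Create br-int bridge on OpenvSwitch" / "Configure OVN in OVSDB" tasks ultimately store per-chassis settings as `external_ids` on the `Open_vSwitch` table. As a hedged illustration (not the role's actual implementation), here is a small Python sketch that renders the equivalent `ovs-vsctl` commands from items shaped like the ones in this log; the helper name `ovn_ovsdb_commands` and the mapping of `state: absent` to `ovs-vsctl remove` are assumptions for illustration:

```python
# Sketch only: render ovs-vsctl commands equivalent to the
# "Configure OVN in OVSDB" loop items logged above. Values below are
# taken from the log output for testbed-node-0; the key names
# (ovn-encap-ip, ovn-remote, ...) are the standard OVN external_ids keys.

def ovn_ovsdb_commands(settings):
    """Build one ovs-vsctl command per external_ids item.

    Items with state 'absent' become 'remove' commands; everything else
    becomes 'set' commands, mirroring present/absent in the log.
    """
    cmds = []
    for item in settings:
        key = item["name"]
        value = item["value"]
        state = item.get("state", "present")
        if state == "absent":
            cmds.append(f"ovs-vsctl remove Open_vSwitch . external_ids {key}")
        else:
            cmds.append(f"ovs-vsctl set Open_vSwitch . external_ids:{key}={value!r}")
    return cmds

node0_items = [
    {"name": "ovn-encap-ip", "value": "192.168.16.10"},
    {"name": "ovn-encap-type", "value": "geneve"},
    {"name": "ovn-remote",
     "value": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"},
    {"name": "ovn-cms-options",
     "value": "enable-chassis-as-gw,availability-zones=nova", "state": "present"},
]

for cmd in ovn_ovsdb_commands(node0_items):
    print(cmd)
```

This only prints the commands; actually applying them would require `ovs-vsctl` on the target chassis, which is why the playbook runs the task per node.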
2026-01-05 03:21:01.611413 | orchestrator | ok: [testbed-node-4]
2026-01-05 03:21:01.611419 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:21:01.611425 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:21:01.611431 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:21:01.611437 | orchestrator |
2026-01-05 03:21:01.611444 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-01-05 03:21:01.611450 | orchestrator | Monday 05 January 2026 03:20:36 +0000 (0:00:04.019) 0:00:27.662 ********
2026-01-05 03:21:01.611457 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-01-05 03:21:01.611464 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-01-05 03:21:01.611470 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-01-05 03:21:01.611476 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-01-05 03:21:01.611482 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-01-05 03:21:01.611488 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-01-05 03:21:01.611495 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 03:21:01.611501 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 03:21:01.611507 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 03:21:01.611514 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 03:21:01.611524 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 03:21:01.611540 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-05 03:21:01.611551 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 03:21:01.611563 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 03:21:01.611581 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 03:21:49.463164 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 03:21:49.463329 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 03:21:49.463344 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-05 03:21:49.463352 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 03:21:49.463360 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 03:21:49.463366 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 03:21:49.463373 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 03:21:49.463379 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 03:21:49.463385 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-05 03:21:49.463392 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 03:21:49.463401 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 03:21:49.463410 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 03:21:49.463422 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 03:21:49.463435 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 03:21:49.463444 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-05 03:21:49.463483 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 03:21:49.463502 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 03:21:49.463511 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 03:21:49.463521 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 03:21:49.463531 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 03:21:49.463537 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-05 03:21:49.463544 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-05 03:21:49.463551 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-05 03:21:49.463557 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-05 03:21:49.463562 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-05 03:21:49.463568 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-05 03:21:49.463574 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-05 03:21:49.463581 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-01-05 03:21:49.463594 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-01-05 03:21:49.463605 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-01-05 03:21:49.463641 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-01-05 03:21:49.463650 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-01-05 03:21:49.463660 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-01-05 03:21:49.463670 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-05 03:21:49.463679 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-05 03:21:49.463687 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-05 03:21:49.463715 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-05 03:21:49.463726 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-05 03:21:49.463736 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-05 03:21:49.463746 | orchestrator |
2026-01-05 03:21:49.463756 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 03:21:49.463767 | orchestrator | Monday 05 January 2026 03:20:58 +0000 (0:00:21.594) 0:00:49.257 ********
2026-01-05 03:21:49.463799 | orchestrator |
2026-01-05 03:21:49.463809 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 03:21:49.463819 | orchestrator | Monday 05 January 2026 03:20:58 +0000 (0:00:00.458) 0:00:49.716 ********
2026-01-05 03:21:49.463829 | orchestrator |
2026-01-05 03:21:49.463838 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 03:21:49.463847 | orchestrator | Monday 05 January 2026 03:20:59 +0000 (0:00:00.481) 0:00:50.198 ********
2026-01-05 03:21:49.463857 | orchestrator |
2026-01-05 03:21:49.463868 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 03:21:49.463878 | orchestrator | Monday 05 January 2026 03:20:59 +0000 (0:00:00.439) 0:00:50.637 ********
2026-01-05 03:21:49.463888 | orchestrator |
2026-01-05 03:21:49.463898 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 03:21:49.463907 | orchestrator | Monday 05 January 2026 03:21:00 +0000 (0:00:00.630) 0:00:51.267 ********
2026-01-05 03:21:49.463916 | orchestrator |
2026-01-05 03:21:49.463924 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-05 03:21:49.463930 | orchestrator | Monday 05 January 2026 03:21:00 +0000 (0:00:00.459) 0:00:51.727 ********
2026-01-05 03:21:49.463936 | orchestrator |
2026-01-05 03:21:49.463941 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-01-05 03:21:49.463947 | orchestrator | Monday 05 January 2026 03:21:01 +0000 (0:00:00.833) 0:00:52.561 ********
2026-01-05 03:21:49.463953 | orchestrator | changed: [testbed-node-0]
2026-01-05 03:21:49.463960 | orchestrator | changed: [testbed-node-4]
2026-01-05 03:21:49.463966 | orchestrator | changed: [testbed-node-3]
2026-01-05 03:21:49.463971 | orchestrator | changed: [testbed-node-5]
2026-01-05 03:21:49.463983 | orchestrator | changed: [testbed-node-1]
2026-01-05 03:21:49.463988 | orchestrator | changed: [testbed-node-2]
2026-01-05 03:21:49.463994 | orchestrator |
2026-01-05 03:21:49.464000 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-01-05 03:21:49.464006 | orchestrator |
2026-01-05 03:21:49.464011 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-05 03:21:49.464017 | orchestrator | Monday 05 January 2026 03:21:34 +0000 (0:00:32.927) 0:01:25.489 ********
2026-01-05 03:21:49.464030 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 03:21:49.464036 | orchestrator |
2026-01-05 03:21:49.464041 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-05 03:21:49.464047 | orchestrator | Monday 05 January 2026 03:21:36 +0000 (0:00:01.875) 0:01:27.364 ********
2026-01-05 03:21:49.464053 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-05 03:21:49.464059 | orchestrator |
2026-01-05 03:21:49.464064 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-01-05 03:21:49.464070 | orchestrator | Monday 05 January 2026 03:21:38 +0000 (0:00:02.016) 0:01:29.381 ********
2026-01-05 03:21:49.464076 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:21:49.464087 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:21:49.464096 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:21:49.464104 | orchestrator |
2026-01-05 03:21:49.464112 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-01-05 03:21:49.464121 | orchestrator | Monday 05 January 2026 03:21:40 +0000 (0:00:01.908) 0:01:31.290 ********
2026-01-05 03:21:49.464129 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:21:49.464138 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:21:49.464147 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:21:49.464157 | orchestrator |
2026-01-05 03:21:49.464166 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-01-05 03:21:49.464175 | orchestrator | Monday 05 January 2026 03:21:41 +0000 (0:00:01.479) 0:01:32.769 ********
2026-01-05 03:21:49.464183 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:21:49.464193 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:21:49.464201 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:21:49.464211 | orchestrator |
2026-01-05 03:21:49.464220 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-01-05 03:21:49.464230 | orchestrator | Monday 05 January 2026 03:21:43 +0000 (0:00:01.619) 0:01:34.389 ********
2026-01-05 03:21:49.464239 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:21:49.464249 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:21:49.464280 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:21:49.464287 | orchestrator |
2026-01-05 03:21:49.464293 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-01-05 03:21:49.464317 | orchestrator | Monday 05 January 2026 03:21:44 +0000 (0:00:01.404) 0:01:35.793 ********
2026-01-05 03:21:49.464323 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:21:49.464329 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:21:49.464334 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:21:49.464340 | orchestrator |
2026-01-05 03:21:49.464346 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-01-05 03:21:49.464352 | orchestrator | Monday 05 January 2026 03:21:46 +0000 (0:00:01.456) 0:01:37.249 ********
2026-01-05 03:21:49.464357 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:21:49.464363 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:21:49.464369 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:21:49.464375 | orchestrator |
2026-01-05 03:21:49.464381 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-01-05 03:21:49.464386 | orchestrator | Monday 05 January 2026 03:21:47 +0000 (0:00:01.323) 0:01:38.573 ********
2026-01-05 03:21:49.464392 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:21:49.464407 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:22:15.954801 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:22:15.954880 | orchestrator |
2026-01-05 03:22:15.954887 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-01-05 03:22:15.954892 | orchestrator | Monday 05 January 2026 03:21:49 +0000 (0:00:01.863) 0:01:40.437 ********
2026-01-05 03:22:15.954896 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:22:15.954901 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:22:15.954905 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:22:15.954909 | orchestrator |
2026-01-05 03:22:15.954914 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-01-05 03:22:15.954934 | orchestrator | Monday 05 January 2026 03:21:51 +0000 (0:00:01.649) 0:01:42.086 ********
2026-01-05 03:22:15.954939 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:22:15.954943 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:22:15.954946 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:22:15.954950 | orchestrator |
2026-01-05 03:22:15.954954 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-01-05 03:22:15.954958 | orchestrator | Monday 05 January 2026 03:21:53 +0000 (0:00:01.971) 0:01:44.058 ********
2026-01-05 03:22:15.954962 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:22:15.954966 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:22:15.954970 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:22:15.954974 | orchestrator |
2026-01-05 03:22:15.954978 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-01-05 03:22:15.954981 | orchestrator | Monday 05 January 2026 03:21:54 +0000 (0:00:01.784) 0:01:45.843 ********
2026-01-05 03:22:15.954985 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:22:15.954990 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:22:15.954993 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:22:15.954997 | orchestrator |
2026-01-05 03:22:15.955001 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-01-05 03:22:15.955005 | orchestrator | Monday 05 January 2026 03:21:56 +0000 (0:00:01.572) 0:01:47.416 ********
2026-01-05 03:22:15.955008 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:22:15.955012 | orchestrator | skipping: [testbed-node-1]
2026-01-05 03:22:15.955016 | orchestrator | skipping: [testbed-node-2]
2026-01-05 03:22:15.955020 | orchestrator |
2026-01-05 03:22:15.955023 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-01-05 03:22:15.955027 | orchestrator | Monday 05 January 2026 03:21:57 +0000 (0:00:01.390) 0:01:48.806 ********
2026-01-05 03:22:15.955031 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:22:15.955035 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:22:15.955048 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:22:15.955052 | orchestrator |
2026-01-05 03:22:15.955055 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-01-05 03:22:15.955059 | orchestrator | Monday 05 January 2026 03:21:59 +0000 (0:00:01.870) 0:01:50.677 ********
2026-01-05 03:22:15.955063 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:22:15.955067 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:22:15.955070 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:22:15.955074 | orchestrator |
2026-01-05 03:22:15.955078 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-01-05 03:22:15.955082 | orchestrator | Monday 05 January 2026 03:22:01 +0000 (0:00:01.576) 0:01:52.253 ********
2026-01-05 03:22:15.955086 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:22:15.955089 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:22:15.955093 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:22:15.955097 | orchestrator |
2026-01-05 03:22:15.955101 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-01-05 03:22:15.955104 | orchestrator | Monday 05 January 2026 03:22:03 +0000 (0:00:02.017) 0:01:54.271 ********
2026-01-05 03:22:15.955108 | orchestrator | ok: [testbed-node-0]
2026-01-05 03:22:15.955112 | orchestrator | ok: [testbed-node-1]
2026-01-05 03:22:15.955116 | orchestrator | ok: [testbed-node-2]
2026-01-05 03:22:15.955119 | orchestrator |
2026-01-05 03:22:15.955123 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-01-05 03:22:15.955128 | orchestrator | Monday 05 January 2026 03:22:04 +0000 (0:00:01.379) 0:01:55.650 ********
2026-01-05 03:22:15.955131 | orchestrator | skipping: [testbed-node-0]
2026-01-05 03:22:15.955135 | orchestrator | skipping: [testbed-node-1]
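The lookup_cluster.yml tasks in this play split the controller hosts into leader and follower groups based on each database's reported cluster state. A minimal sketch of how such a role can be extracted, assuming the standard `Role:` line in `ovn-appctl cluster/status` output; the function name `cluster_role` and the sample text below are illustrative, not taken from this job's output:

```python
# Sketch only: derive a host's OVN NB/SB cluster role from the text that
# `ovn-appctl cluster/status OVN_Northbound` (or OVN_Southbound) prints.
# The playbook's actual implementation may differ.

def cluster_role(status_text):
    """Return 'leader', 'follower', or 'candidate' from cluster/status output."""
    for line in status_text.splitlines():
        if line.strip().startswith("Role:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no Role line in cluster/status output")

# Illustrative sample, loosely modeled on real cluster/status output.
sample = """\
Name: OVN_Northbound
Address: tcp:192.168.16.10:6643
Status: cluster member
Role: leader
Term: 4
"""

print(cluster_role(sample))  # prints: leader
```

With one such role per host, grouping hosts into leader/follower sets (as the "Divide hosts by their OVN NB/SB leader/follower role" tasks do) is a simple partition over the parsed values.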
2026-01-05 03:22:15.955139 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:22:15.955143 | orchestrator | 2026-01-05 03:22:15.955146 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-05 03:22:15.955150 | orchestrator | Monday 05 January 2026 03:22:06 +0000 (0:00:01.346) 0:01:56.996 ******** 2026-01-05 03:22:15.955158 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:22:15.955161 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:22:15.955165 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:22:15.955169 | orchestrator | 2026-01-05 03:22:15.955173 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-05 03:22:15.955177 | orchestrator | Monday 05 January 2026 03:22:07 +0000 (0:00:01.661) 0:01:58.658 ******** 2026-01-05 03:22:15.955183 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955189 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955203 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955208 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955213 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955217 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955225 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955229 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955233 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955322 | orchestrator | 2026-01-05 03:22:15.955327 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-05 03:22:15.955330 | orchestrator | Monday 05 January 2026 03:22:10 +0000 (0:00:02.727) 0:02:01.386 ******** 2026-01-05 03:22:15.955334 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955339 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955343 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:22:15.955350 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.647739 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.647894 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.647925 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.647971 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.647992 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.648042 | orchestrator | 2026-01-05 03:23:07.648064 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-05 03:23:07.648085 | orchestrator | Monday 05 January 2026 03:22:15 +0000 (0:00:05.537) 0:02:06.924 ******** 2026-01-05 03:23:07.648105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.648126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.648145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.648165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.648186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.648260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.648281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-05 03:23:07.648293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.648312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-05 03:23:07.648325 | orchestrator | 2026-01-05 03:23:07.648345 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 03:23:07.648357 | orchestrator | Monday 05 January 2026 03:22:20 +0000 (0:00:04.083) 0:02:11.008 ******** 2026-01-05 03:23:07.648369 | orchestrator | 2026-01-05 03:23:07.648381 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 03:23:07.648393 | orchestrator | Monday 05 January 2026 03:22:20 +0000 (0:00:00.552) 0:02:11.560 ******** 2026-01-05 03:23:07.648404 | orchestrator | 2026-01-05 03:23:07.648416 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-05 03:23:07.648427 | orchestrator | Monday 05 January 2026 03:22:21 +0000 (0:00:00.485) 0:02:12.045 ******** 2026-01-05 03:23:07.648438 | orchestrator | 2026-01-05 03:23:07.648450 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-05 03:23:07.648461 | orchestrator | Monday 05 January 2026 03:22:21 +0000 
(0:00:00.783) 0:02:12.828 ******** 2026-01-05 03:23:07.648473 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:23:07.648486 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:23:07.648497 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:23:07.648508 | orchestrator | 2026-01-05 03:23:07.648518 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-05 03:23:07.648528 | orchestrator | Monday 05 January 2026 03:22:32 +0000 (0:00:10.556) 0:02:23.385 ******** 2026-01-05 03:23:07.648537 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:23:07.648547 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:23:07.648557 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:23:07.648566 | orchestrator | 2026-01-05 03:23:07.648576 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-05 03:23:07.648585 | orchestrator | Monday 05 January 2026 03:22:42 +0000 (0:00:10.387) 0:02:33.772 ******** 2026-01-05 03:23:07.648595 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:23:07.648605 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:23:07.648614 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:23:07.648625 | orchestrator | 2026-01-05 03:23:07.648642 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-05 03:23:07.648657 | orchestrator | Monday 05 January 2026 03:22:53 +0000 (0:00:11.113) 0:02:44.885 ******** 2026-01-05 03:23:07.648675 | orchestrator | Pausing for 5 seconds 2026-01-05 03:23:07.648692 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:23:07.648707 | orchestrator | 2026-01-05 03:23:07.648723 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-05 03:23:07.648740 | orchestrator | Monday 05 January 2026 03:23:00 +0000 (0:00:06.157) 0:02:51.043 ******** 2026-01-05 03:23:07.648757 | orchestrator | ok: 
[testbed-node-0] 2026-01-05 03:23:07.648774 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:23:07.648790 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:23:07.648806 | orchestrator | 2026-01-05 03:23:07.648821 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-05 03:23:07.648832 | orchestrator | Monday 05 January 2026 03:23:01 +0000 (0:00:01.908) 0:02:52.951 ******** 2026-01-05 03:23:07.648841 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:23:07.648851 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:23:07.648861 | orchestrator | changed: [testbed-node-1] 2026-01-05 03:23:07.648870 | orchestrator | 2026-01-05 03:23:07.648880 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-05 03:23:07.648890 | orchestrator | Monday 05 January 2026 03:23:03 +0000 (0:00:01.799) 0:02:54.751 ******** 2026-01-05 03:23:07.648900 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:23:07.648910 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:23:07.648919 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:23:07.648929 | orchestrator | 2026-01-05 03:23:07.648939 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-05 03:23:07.648949 | orchestrator | Monday 05 January 2026 03:23:05 +0000 (0:00:02.159) 0:02:56.911 ******** 2026-01-05 03:23:07.648966 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:23:12.693988 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:23:12.694132 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:23:12.694141 | orchestrator | 2026-01-05 03:23:12.694146 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-05 03:23:12.694152 | orchestrator | Monday 05 January 2026 03:23:07 +0000 (0:00:01.714) 0:02:58.626 ******** 2026-01-05 03:23:12.694156 | orchestrator | ok: [testbed-node-0] 2026-01-05 
03:23:12.694162 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:23:12.694166 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:23:12.694170 | orchestrator | 2026-01-05 03:23:12.694175 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-05 03:23:12.694179 | orchestrator | Monday 05 January 2026 03:23:09 +0000 (0:00:01.828) 0:03:00.454 ******** 2026-01-05 03:23:12.694184 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:23:12.694188 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:23:12.694256 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:23:12.694264 | orchestrator | 2026-01-05 03:23:12.694272 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 03:23:12.694279 | orchestrator | testbed-node-0 : ok=38  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 03:23:12.694285 | orchestrator | testbed-node-1 : ok=37  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-05 03:23:12.694291 | orchestrator | testbed-node-2 : ok=36  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-05 03:23:12.694297 | orchestrator | testbed-node-3 : ok=11  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 03:23:12.694304 | orchestrator | testbed-node-4 : ok=11  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 03:23:12.694328 | orchestrator | testbed-node-5 : ok=11  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-05 03:23:12.694335 | orchestrator | 2026-01-05 03:23:12.694341 | orchestrator | 2026-01-05 03:23:12.694347 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 03:23:12.694352 | orchestrator | Monday 05 January 2026 03:23:12 +0000 (0:00:02.789) 0:03:03.244 ******** 2026-01-05 03:23:12.694358 | orchestrator | 
=============================================================================== 2026-01-05 03:23:12.694365 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.93s 2026-01-05 03:23:12.694371 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.59s 2026-01-05 03:23:12.694377 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 11.11s 2026-01-05 03:23:12.694383 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 10.56s 2026-01-05 03:23:12.694390 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 10.39s 2026-01-05 03:23:12.694396 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.16s 2026-01-05 03:23:12.694401 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.54s 2026-01-05 03:23:12.694407 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 4.08s 2026-01-05 03:23:12.694413 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 4.02s 2026-01-05 03:23:12.694419 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.30s 2026-01-05 03:23:12.694424 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.23s 2026-01-05 03:23:12.694430 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 3.07s 2026-01-05 03:23:12.694437 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.04s 2026-01-05 03:23:12.694443 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 2.79s 2026-01-05 03:23:12.694457 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.74s 2026-01-05 03:23:12.694464 | orchestrator | ovn-db : 
Ensuring config directories exist ------------------------------ 2.73s 2026-01-05 03:23:12.694470 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.70s 2026-01-05 03:23:12.694477 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.53s 2026-01-05 03:23:12.694483 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.40s 2026-01-05 03:23:12.694487 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.27s 2026-01-05 03:23:13.027341 | orchestrator | + [[ true == \f\a\l\s\e ]] 2026-01-05 03:23:13.027418 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-01-05 03:23:13.027425 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-01-05 03:23:13.034704 | orchestrator | + set -e 2026-01-05 03:23:13.034795 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-05 03:23:13.034805 | orchestrator | ++ export INTERACTIVE=false 2026-01-05 03:23:13.035037 | orchestrator | ++ INTERACTIVE=false 2026-01-05 03:23:13.035054 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-05 03:23:13.035058 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-05 03:23:13.035063 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-05 03:23:13.035716 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-05 03:23:13.041248 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-05 03:23:13.041323 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-05 03:23:13.041329 | orchestrator | + osism apply -a upgrade keystone 2026-01-05 03:23:15.131284 | orchestrator | 2026-01-05 03:23:15 | INFO  | Task 74c80d39-82ea-4af2-8cdb-64ef392f247b (keystone) was prepared for execution. 
2026-01-05 03:23:15.131360 | orchestrator | 2026-01-05 03:23:15 | INFO  | It takes a moment until task 74c80d39-82ea-4af2-8cdb-64ef392f247b (keystone) has been started and output is visible here. 2026-01-05 03:23:26.215716 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-01-05 03:23:26.215822 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-01-05 03:23:26.215837 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-01-05 03:23:26.215842 | orchestrator | (): 'NoneType' object is not subscriptable 2026-01-05 03:23:26.215852 | orchestrator | 2026-01-05 03:23:26.215858 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 03:23:26.215863 | orchestrator | 2026-01-05 03:23:26.215867 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 03:23:26.215872 | orchestrator | Monday 05 January 2026 03:23:20 +0000 (0:00:01.212) 0:00:01.212 ******** 2026-01-05 03:23:26.215877 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:23:26.215882 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:23:26.215887 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:23:26.215891 | orchestrator | 2026-01-05 03:23:26.215895 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 03:23:26.215900 | orchestrator | Monday 05 January 2026 03:23:21 +0000 (0:00:00.985) 0:00:02.198 ******** 2026-01-05 03:23:26.215904 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-05 03:23:26.215909 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-05 03:23:26.215913 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-05 03:23:26.215918 | orchestrator | 2026-01-05 03:23:26.215922 | orchestrator | PLAY [Apply role keystone] 
***************************************************** 2026-01-05 03:23:26.215927 | orchestrator | 2026-01-05 03:23:26.215945 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 03:23:26.215953 | orchestrator | Monday 05 January 2026 03:23:22 +0000 (0:00:01.127) 0:00:03.325 ******** 2026-01-05 03:23:26.215980 | orchestrator | included: /ansible/roles/keystone/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:23:26.215988 | orchestrator | 2026-01-05 03:23:26.215995 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-05 03:23:26.216001 | orchestrator | Monday 05 January 2026 03:23:23 +0000 (0:00:01.005) 0:00:04.331 ******** 2026-01-05 03:23:26.216013 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:26.216024 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:26.216049 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:23:26.216059 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:23:26.216072 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:26.216089 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:26.216095 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:26.216099 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:23:26.216109 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:32.936768 | orchestrator | 2026-01-05 03:23:32.936882 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-05 03:23:32.936899 | orchestrator | Monday 05 January 2026 03:23:26 +0000 
(0:00:02.467) 0:00:06.798 ******** 2026-01-05 03:23:32.936910 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:23:32.936922 | orchestrator | 2026-01-05 03:23:32.936932 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-05 03:23:32.936942 | orchestrator | Monday 05 January 2026 03:23:26 +0000 (0:00:00.168) 0:00:06.967 ******** 2026-01-05 03:23:32.936952 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:23:32.936963 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:23:32.936972 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:23:32.936982 | orchestrator | 2026-01-05 03:23:32.936992 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-05 03:23:32.937027 | orchestrator | Monday 05 January 2026 03:23:26 +0000 (0:00:00.328) 0:00:07.296 ******** 2026-01-05 03:23:32.937037 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 03:23:32.937045 | orchestrator | 2026-01-05 03:23:32.937054 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 03:23:32.937064 | orchestrator | Monday 05 January 2026 03:23:27 +0000 (0:00:01.190) 0:00:08.487 ******** 2026-01-05 03:23:32.937074 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:23:32.937083 | orchestrator | 2026-01-05 03:23:32.937092 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-05 03:23:32.937101 | orchestrator | Monday 05 January 2026 03:23:29 +0000 (0:00:01.133) 0:00:09.621 ******** 2026-01-05 03:23:32.937130 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:32.937145 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:32.937204 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:32.937216 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:23:32.937235 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:23:32.937242 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:23:32.937249 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:32.937256 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:32.937262 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:32.937268 | orchestrator | 2026-01-05 03:23:32.937274 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-05 03:23:32.937280 | orchestrator | Monday 05 January 2026 03:23:32 +0000 (0:00:03.282) 0:00:12.903 ******** 2026-01-05 03:23:32.937293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:23:33.952874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:33.953010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:23:33.953035 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:23:33.953059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:23:33.953081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:33.953100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:23:33.953150 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:23:33.953240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:23:33.953263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:33.953280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:23:33.953298 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:23:33.953315 | orchestrator | 2026-01-05 03:23:33.953336 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-05 03:23:33.953358 | orchestrator | Monday 05 January 2026 03:23:32 +0000 (0:00:00.630) 0:00:13.534 ******** 2026-01-05 03:23:33.953381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:23:33.953417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:33.953446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:23:37.298527 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:23:37.298633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-01-05 03:23:37.298647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:37.298656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:23:37.298662 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:23:37.298680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:23:37.298714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:37.298738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:23:37.298746 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:23:37.298752 | orchestrator | 2026-01-05 03:23:37.298760 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-05 03:23:37.298767 | orchestrator | Monday 05 
January 2026 03:23:33 +0000 (0:00:01.010) 0:00:14.545 ******** 2026-01-05 03:23:37.298774 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:37.298781 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:37.298794 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:37.298807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:23:42.624347 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:23:42.624462 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:23:42.624477 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:42.624490 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:42.624525 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:42.624538 | orchestrator | 2026-01-05 03:23:42.624550 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-05 03:23:42.624565 | orchestrator | Monday 05 January 2026 03:23:37 +0000 (0:00:03.348) 0:00:17.893 ******** 2026-01-05 03:23:42.624720 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:42.624769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:42.624790 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:42.624820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:42.624833 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:23:42.624846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:42.624876 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:46.837870 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:46.837981 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:23:46.838014 | orchestrator | 2026-01-05 03:23:46.838065 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-05 03:23:46.838074 | orchestrator | Monday 05 January 2026 03:23:42 +0000 (0:00:05.320) 0:00:23.213 ******** 2026-01-05 03:23:46.838080 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:23:46.838088 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:23:46.838095 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:23:46.838101 | orchestrator | 2026-01-05 03:23:46.838109 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-05 03:23:46.838115 | orchestrator | Monday 05 January 2026 03:23:44 +0000 (0:00:01.498) 0:00:24.712 ******** 2026-01-05 03:23:46.838122 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:23:46.838130 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:23:46.838137 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:23:46.838143 | orchestrator | 2026-01-05 03:23:46.838150 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-05 03:23:46.838157 | orchestrator | Monday 05 January 2026 03:23:45 +0000 (0:00:00.912) 0:00:25.624 ******** 2026-01-05 03:23:46.838164 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:23:46.838217 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:23:46.838225 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:23:46.838232 | orchestrator | 2026-01-05 03:23:46.838239 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-05 03:23:46.838245 | 
orchestrator | Monday 05 January 2026 03:23:45 +0000 (0:00:00.342) 0:00:25.967 ******** 2026-01-05 03:23:46.838252 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:23:46.838259 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:23:46.838265 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:23:46.838272 | orchestrator | 2026-01-05 03:23:46.838279 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-05 03:23:46.838285 | orchestrator | Monday 05 January 2026 03:23:45 +0000 (0:00:00.338) 0:00:26.306 ******** 2026-01-05 03:23:46.838295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:23:46.838318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:46.838343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:23:46.838357 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:23:46.838365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:23:46.838372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:23:46.838379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:23:46.838386 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:23:46.838397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:23:46.838411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:24:09.148002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:24:09.148105 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:24:09.148118 | orchestrator | 2026-01-05 03:24:09.148126 | orchestrator | TASK [keystone : include_tasks] 
************************************************ 2026-01-05 03:24:09.148135 | orchestrator | Monday 05 January 2026 03:23:46 +0000 (0:00:01.117) 0:00:27.423 ******** 2026-01-05 03:24:09.148142 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:24:09.148149 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:24:09.148221 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:24:09.148229 | orchestrator | 2026-01-05 03:24:09.148236 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-05 03:24:09.148243 | orchestrator | Monday 05 January 2026 03:23:47 +0000 (0:00:00.360) 0:00:27.784 ******** 2026-01-05 03:24:09.148250 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 03:24:09.148258 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 03:24:09.148264 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 03:24:09.148271 | orchestrator | 2026-01-05 03:24:09.148278 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-05 03:24:09.148285 | orchestrator | Monday 05 January 2026 03:23:48 +0000 (0:00:01.756) 0:00:29.540 ******** 2026-01-05 03:24:09.148292 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 03:24:09.148299 | orchestrator | 2026-01-05 03:24:09.148307 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-05 03:24:09.148314 | orchestrator | Monday 05 January 2026 03:23:49 +0000 (0:00:00.998) 0:00:30.538 ******** 2026-01-05 03:24:09.148320 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:24:09.148327 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:24:09.148334 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:24:09.148341 | orchestrator | 2026-01-05 03:24:09.148347 | 
orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-05 03:24:09.148353 | orchestrator | Monday 05 January 2026 03:23:50 +0000 (0:00:00.574) 0:00:31.113 ******** 2026-01-05 03:24:09.148360 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 03:24:09.148366 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 03:24:09.148373 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 03:24:09.148380 | orchestrator | 2026-01-05 03:24:09.148387 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-05 03:24:09.148394 | orchestrator | Monday 05 January 2026 03:23:51 +0000 (0:00:01.378) 0:00:32.491 ******** 2026-01-05 03:24:09.148401 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:24:09.148408 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:24:09.148414 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:24:09.148421 | orchestrator | 2026-01-05 03:24:09.148427 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-05 03:24:09.148458 | orchestrator | Monday 05 January 2026 03:23:52 +0000 (0:00:00.387) 0:00:32.879 ******** 2026-01-05 03:24:09.148466 | orchestrator | ok: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 03:24:09.148471 | orchestrator | ok: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 03:24:09.148478 | orchestrator | ok: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 03:24:09.148484 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 03:24:09.148492 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 03:24:09.148499 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 
03:24:09.148506 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 03:24:09.148530 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 03:24:09.148537 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 03:24:09.148544 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 03:24:09.148550 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 03:24:09.148557 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 03:24:09.148564 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 03:24:09.148570 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 03:24:09.148600 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 03:24:09.148608 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 03:24:09.148615 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 03:24:09.148622 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 03:24:09.148629 | orchestrator | ok: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 03:24:09.148636 | orchestrator | ok: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 03:24:09.148643 | orchestrator | ok: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 03:24:09.148649 | orchestrator | 2026-01-05 
03:24:09.148656 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-05 03:24:09.148663 | orchestrator | Monday 05 January 2026 03:24:01 +0000 (0:00:09.596) 0:00:42.476 ******** 2026-01-05 03:24:09.148670 | orchestrator | ok: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 03:24:09.148678 | orchestrator | ok: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 03:24:09.148685 | orchestrator | ok: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 03:24:09.148692 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 03:24:09.148700 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 03:24:09.148707 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 03:24:09.148714 | orchestrator | 2026-01-05 03:24:09.148720 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-05 03:24:09.148727 | orchestrator | Monday 05 January 2026 03:24:05 +0000 (0:00:03.741) 0:00:46.217 ******** 2026-01-05 03:24:09.148737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:24:09.148758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:24:09.148774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:24:26.892982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:24:26.893102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:24:26.893140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:24:26.893171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:24:26.893184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:24:26.893189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:24:26.893193 | orchestrator | 2026-01-05 03:24:26.893199 | orchestrator | TASK [keystone : Enable log_bin_trust_function_creators function] ************** 2026-01-05 03:24:26.893205 | orchestrator | Monday 05 January 2026 03:24:09 +0000 (0:00:03.522) 0:00:49.740 ******** 2026-01-05 03:24:26.893209 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:24:26.893214 | orchestrator | 2026-01-05 03:24:26.893218 | orchestrator | TASK [keystone : Init keystone database upgrade] ******************************* 2026-01-05 03:24:26.893221 | orchestrator | Monday 05 January 2026 03:24:11 +0000 (0:00:02.473) 0:00:52.214 ******** 2026-01-05 03:24:26.893225 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:24:26.893229 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:24:26.893245 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:24:26.893249 | orchestrator | 2026-01-05 03:24:26.893253 | orchestrator | TASK [keystone : Finish keystone database upgrade] ***************************** 2026-01-05 03:24:26.893256 | orchestrator | Monday 05 January 2026 03:24:12 +0000 (0:00:00.545) 0:00:52.760 ******** 2026-01-05 03:24:26.893260 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:24:26.893264 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-01-05 03:24:26.893269 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-01-05 03:24:26.893281 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:24:26.893305 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:24:26.893309 | orchestrator | 2026-01-05 03:24:26.893313 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2026-01-05 03:24:26.893316 | orchestrator | Monday 05 January 2026 03:24:12 +0000 (0:00:00.702) 0:00:53.462 ******** 2026-01-05 03:24:26.893320 | orchestrator | 2026-01-05 03:24:26.893324 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 03:24:26.893328 | orchestrator | Monday 05 January 2026 03:24:13 +0000 (0:00:00.310) 0:00:53.772 ******** 2026-01-05 03:24:26.893332 | orchestrator | 2026-01-05 03:24:26.893335 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 03:24:26.893339 | orchestrator | Monday 05 January 2026 03:24:13 +0000 (0:00:00.076) 0:00:53.849 ******** 2026-01-05 03:24:26.893345 | orchestrator | 2026-01-05 03:24:26.893351 | orchestrator | RUNNING HANDLER [keystone : Init keystone database upgrade] ******************** 2026-01-05 03:24:26.893356 | orchestrator | Monday 05 January 2026 03:24:13 +0000 (0:00:00.085) 0:00:53.934 ******** 2026-01-05 03:24:26.893553 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\nINFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying service configuration files\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\nINFO:__main__:Setting permission for /usr/bin/keystone-startup.sh\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\nINFO:__main__:Setting permission for /etc/keystone/keystone.conf\nINFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla\nINFO:__main__:Setting permission for /var/log/kolla/keystone/keystone.log\nINFO:__main__:Setting permission for /etc/keystone/fernet-keys\n++ cat /run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ keystone:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ keystone:keystone != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync --expand\n2026-01-05 03:24:25.669 1075 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342\n2026-01-05 03:24:25.678 1075 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-05 03:24:25.678 1075 ERROR keystone Traceback (most recent call last):\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-05 03:24:25.678 1075 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-05 03:24:25.678 1075 ERROR keystone return self.pool.connect()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-05 03:24:25.678 1075 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-05 03:24:25.678 1075 ERROR keystone 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-05 03:24:25.678 1075 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-05 03:24:25.678 1075 ERROR keystone rec = pool._do_get()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-05 03:24:25.678 1075 ERROR keystone with util.safe_reraise():\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-05 03:24:25.678 1075 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-05 03:24:25.678 1075 ERROR keystone return self._create_connection()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-05 03:24:25.678 1075 ERROR keystone return _ConnectionRecord(self)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-05 03:24:25.678 1075 ERROR keystone self.__connect()\n2026-01-05 03:24:25.678 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-05 03:24:25.678 1075 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-05 03:24:25.678 1075 ERROR keystone self(*args, **kw)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-05 03:24:25.678 1075 ERROR keystone fn(*args, **kw)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-05 03:24:25.678 1075 ERROR keystone return once_fn(*arg, **kw)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-05 03:24:25.678 1075 ERROR keystone dialect.initialize(c)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-05 03:24:25.678 1075 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-05 03:24:25.678 1075 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", 
line 583, in get_default_isolation_level\n2026-01-05 03:24:25.678 1075 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-05 03:24:25.678 1075 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-05 03:24:25.678 1075 ERROR keystone result = self._query(query)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-05 03:24:25.678 1075 ERROR keystone conn.query(q)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-05 03:24:25.678 1075 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-05 03:24:25.678 1075 ERROR keystone result.read()\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-05 03:24:25.678 1075 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in 
_read_packet\n2026-01-05 03:24:25.678 1075 ERROR keystone packet.raise_for_error()\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-05 03:24:25.678 1075 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-05 03:24:25.678 1075 ERROR keystone raise errorclass(errno, errval)\n2026-01-05 03:24:25.678 1075 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-05 03:24:25.678 1075 ERROR keystone \n2026-01-05 03:24:25.678 1075 ERROR keystone The above exception was the direct cause of the following exception:\n2026-01-05 03:24:25.678 1075 ERROR keystone \n2026-01-05 03:24:25.678 1075 ERROR keystone Traceback (most recent call last):\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in \n2026-01-05 03:24:25.678 1075 ERROR keystone sys.exit(main())\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main\n2026-01-05 03:24:25.678 1075 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main\n2026-01-05 03:24:25.678 1075 ERROR keystone CONF.command.cmd_class.main()\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 487, in main\n2026-01-05 03:24:25.678 1075 ERROR keystone upgrades.expand_schema()\n2026-01-05 03:24:25.678 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 287, in expand_schema\n2026-01-05 03:24:25.678 1075 ERROR keystone _db_sync(EXPAND_BRANCH, engine=engine)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync\n2026-01-05 03:24:25.678 1075 ERROR keystone with sql.session_for_write() as session:\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-05 03:24:25.678 1075 ERROR keystone return next(self.gen)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope\n2026-01-05 03:24:25.678 1075 ERROR keystone with current._produce_block(\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-05 03:24:25.678 1075 ERROR keystone return next(self.gen)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session\n2026-01-05 03:24:25.678 1075 ERROR keystone self.session = self.factory._create_session(\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session\n2026-01-05 03:24:25.678 1075 ERROR keystone self._start()\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in _start\n2026-01-05 03:24:25.678 1075 ERROR keystone self._setup_for_connection(\n2026-01-05 03:24:25.678 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection\n2026-01-05 03:24:25.678 1075 ERROR keystone engine = engines.create_engine(\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator\n2026-01-05 03:24:25.678 1075 ERROR keystone return wrapped(*args, **kwargs)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine\n2026-01-05 03:24:25.678 1075 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection\n2026-01-05 03:24:25.678 1075 ERROR keystone return engine.connect()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect\n2026-01-05 03:24:25.678 1075 ERROR keystone return self._connection_cls(self)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__\n2026-01-05 03:24:25.678 1075 ERROR keystone Connection._handle_dbapi_exception_noconnection(\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection\n2026-01-05 03:24:25.678 1075 ERROR 
keystone raise newraise.with_traceback(exc_info[2]) from e\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-05 03:24:25.678 1075 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-05 03:24:25.678 1075 ERROR keystone return self.pool.connect()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-05 03:24:25.678 1075 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-05 03:24:25.678 1075 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-05 03:24:25.678 1075 ERROR keystone rec = pool._do_get()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-05 03:24:25.678 1075 ERROR keystone with util.safe_reraise():\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-05 03:24:25.678 1075 ERROR keystone 
raise exc_value.with_traceback(exc_tb)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-05 03:24:25.678 1075 ERROR keystone return self._create_connection()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-05 03:24:25.678 1075 ERROR keystone return _ConnectionRecord(self)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-05 03:24:25.678 1075 ERROR keystone self.__connect()\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-05 03:24:25.678 1075 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-05 03:24:25.678 1075 ERROR keystone self(*args, **kw)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-05 03:24:25.678 1075 ERROR keystone fn(*args, **kw)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-05 03:24:25.678 1075 ERROR keystone return once_fn(*arg, **kw)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-05 03:24:25.678 1075 ERROR keystone dialect.initialize(c)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-05 03:24:25.678 1075 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-05 03:24:25.678 1075 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-05 03:24:25.678 1075 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-05 03:24:25.678 1075 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-05 03:24:25.678 1075 ERROR keystone result = self._query(query)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-05 03:24:25.678 1075 ERROR keystone conn.query(q)\n2026-01-05 03:24:25.678 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-05 03:24:25.678 1075 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-05 03:24:25.678 1075 ERROR keystone result.read()\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-05 03:24:25.678 1075 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-05 03:24:25.678 1075 ERROR keystone packet.raise_for_error()\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-05 03:24:25.678 1075 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-05 03:24:25.678 1075 ERROR keystone raise errorclass(errno, errval)\n2026-01-05 03:24:25.678 1075 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-05 03:24:25.678 1075 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-05 03:24:25.678 1075 ERROR keystone \n", "stderr_lines": ["+ sudo -E kolla_set_configs", "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", 
"INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying service configuration files", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh", "INFO:__main__:Setting permission for /usr/bin/keystone-startup.sh", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf", "INFO:__main__:Setting permission for /etc/keystone/keystone.conf", "INFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla", "INFO:__main__:Setting permission for /var/log/kolla/keystone/keystone.log", "INFO:__main__:Setting permission for /etc/keystone/fernet-keys", "++ cat /run_command", "+ CMD=/usr/bin/keystone-startup.sh", "+ ARGS=", "+ sudo kolla_copy_cacerts", "rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL", "+ sudo kolla_install_projects", "+ [[ ! -n '' ]]", "+ . kolla_extend_start", "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", "++ [[ ! -d /var/log/kolla/keystone ]]", "+++ stat -c %U:%G /var/log/kolla/keystone", "++ [[ keystone:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", "++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'", "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", "++ [[ keystone:keystone != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", "+++ stat -c %a /var/log/kolla/keystone", "++ [[ 2755 != \\7\\5\\5 ]]", "++ chmod 755 /var/log/kolla/keystone", "++ EXTRA_KEYSTONE_MANAGE_ARGS=", "++ [[ -n 0 ]]", "++ sudo -H -u keystone keystone-manage db_sync --expand", "2026-01-05 03:24:25.669 1075 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342", "2026-01-05 03:24:25.678 1075 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "(Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-05 03:24:25.678 1075 ERROR keystone Traceback (most recent call last):", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-05 03:24:25.678 1075 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-05 03:24:25.678 1075 ERROR keystone return self.pool.connect()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-05 03:24:25.678 1075 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-05 03:24:25.678 1075 
ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-05 03:24:25.678 1075 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-05 03:24:25.678 1075 ERROR keystone rec = pool._do_get()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-05 03:24:25.678 1075 ERROR keystone with util.safe_reraise():", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-05 03:24:25.678 1075 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-05 03:24:25.678 1075 ERROR keystone return self._create_connection()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-05 03:24:25.678 1075 ERROR keystone return _ConnectionRecord(self)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-05 03:24:25.678 1075 ERROR keystone self.__connect()", "2026-01-05 03:24:25.678 1075 ERROR keystone 
File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-05 03:24:25.678 1075 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-05 03:24:25.678 1075 ERROR keystone self(*args, **kw)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-05 03:24:25.678 1075 ERROR keystone fn(*args, **kw)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-05 03:24:25.678 1075 ERROR keystone return once_fn(*arg, **kw)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-05 03:24:25.678 1075 ERROR keystone dialect.initialize(c)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-05 03:24:25.678 1075 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-05 03:24:25.678 1075 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-05 03:24:25.678 1075 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-05 03:24:25.678 1075 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-05 03:24:25.678 1075 ERROR keystone result = self._query(query)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-05 03:24:25.678 1075 ERROR keystone conn.query(q)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-05 03:24:25.678 1075 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-05 03:24:25.678 1075 ERROR keystone result.read()", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-05 03:24:25.678 1075 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 
ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-05 03:24:25.678 1075 ERROR keystone packet.raise_for_error()", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-05 03:24:25.678 1075 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-05 03:24:25.678 1075 ERROR keystone raise errorclass(errno, errval)", "2026-01-05 03:24:25.678 1075 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-05 03:24:25.678 1075 ERROR keystone ", "2026-01-05 03:24:25.678 1075 ERROR keystone The above exception was the direct cause of the following exception:", "2026-01-05 03:24:25.678 1075 ERROR keystone ", "2026-01-05 03:24:25.678 1075 ERROR keystone Traceback (most recent call last):", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in ", "2026-01-05 03:24:25.678 1075 ERROR keystone sys.exit(main())", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main", "2026-01-05 03:24:25.678 1075 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main", "2026-01-05 03:24:25.678 1075 ERROR keystone CONF.command.cmd_class.main()", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 487, in main", "2026-01-05 03:24:25.678 1075 ERROR keystone 
upgrades.expand_schema()", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 287, in expand_schema", "2026-01-05 03:24:25.678 1075 ERROR keystone _db_sync(EXPAND_BRANCH, engine=engine)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync", "2026-01-05 03:24:25.678 1075 ERROR keystone with sql.session_for_write() as session:", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-05 03:24:25.678 1075 ERROR keystone return next(self.gen)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope", "2026-01-05 03:24:25.678 1075 ERROR keystone with current._produce_block(", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-05 03:24:25.678 1075 ERROR keystone return next(self.gen)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session", "2026-01-05 03:24:25.678 1075 ERROR keystone self.session = self.factory._create_session(", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session", "2026-01-05 03:24:25.678 1075 ERROR keystone self._start()", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in _start", 
"2026-01-05 03:24:25.678 1075 ERROR keystone self._setup_for_connection(", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection", "2026-01-05 03:24:25.678 1075 ERROR keystone engine = engines.create_engine(", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator", "2026-01-05 03:24:25.678 1075 ERROR keystone return wrapped(*args, **kwargs)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine", "2026-01-05 03:24:25.678 1075 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection", "2026-01-05 03:24:25.678 1075 ERROR keystone return engine.connect()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect", "2026-01-05 03:24:25.678 1075 ERROR keystone return self._connection_cls(self)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__", "2026-01-05 03:24:25.678 1075 ERROR keystone Connection._handle_dbapi_exception_noconnection(", "2026-01-05 03:24:25.678 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection", "2026-01-05 03:24:25.678 1075 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-05 03:24:25.678 1075 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-05 03:24:25.678 1075 ERROR keystone return self.pool.connect()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-05 03:24:25.678 1075 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-05 03:24:25.678 1075 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-05 03:24:25.678 1075 ERROR keystone rec = pool._do_get()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-05 03:24:25.678 1075 ERROR keystone with util.safe_reraise():", 
"2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-05 03:24:25.678 1075 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-05 03:24:25.678 1075 ERROR keystone return self._create_connection()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-05 03:24:25.678 1075 ERROR keystone return _ConnectionRecord(self)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-05 03:24:25.678 1075 ERROR keystone self.__connect()", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-05 03:24:25.678 1075 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-05 03:24:25.678 1075 ERROR keystone self(*args, **kw)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-05 03:24:25.678 1075 ERROR keystone fn(*args, **kw)", "2026-01-05 03:24:25.678 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-05 03:24:25.678 1075 ERROR keystone return once_fn(*arg, **kw)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-05 03:24:25.678 1075 ERROR keystone dialect.initialize(c)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-05 03:24:25.678 1075 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-05 03:24:25.678 1075 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-05 03:24:25.678 1075 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-05 03:24:25.678 1075 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-05 03:24:25.678 1075 ERROR keystone result = self._query(query)", "2026-01-05 03:24:25.678 1075 ERROR keystone 
^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-05 03:24:25.678 1075 ERROR keystone conn.query(q)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-05 03:24:25.678 1075 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-05 03:24:25.678 1075 ERROR keystone result.read()", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-05 03:24:25.678 1075 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-05 03:24:25.678 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-05 03:24:25.678 1075 ERROR keystone packet.raise_for_error()", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-05 03:24:25.678 1075 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-05 03:24:25.678 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-05 03:24:25.678 1075 ERROR keystone raise errorclass(errno, errval)", "2026-01-05 03:24:25.678 1075 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", 
"2026-01-05 03:24:25.678 1075 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-05 03:24:25.678 1075 ERROR keystone "], "stdout": "Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n", "stdout_lines": ["Updating certificates in /etc/ssl/certs...", "1 added, 0 removed; done.", "Running hooks in /etc/ca-certificates/update.d...", "done."]} 2026-01-05 03:24:28.057560 | orchestrator | 2026-01-05 03:24:28 | INFO  | Task e0db7570-af97-4011-8651-707c5e4ecfab (keystone) was prepared for execution. 2026-01-05 03:24:28.057656 | orchestrator | 2026-01-05 03:24:28 | INFO  | It takes a moment until task e0db7570-af97-4011-8651-707c5e4ecfab (keystone) has been started and output is visible here. 2026-01-05 03:24:43.431093 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 2026-01-05 03:24:43.431298 | orchestrator | (): '65f0876e-2f0f-d565-0a20-000000000013' 2026-01-05 03:24:43.431330 | orchestrator | 2026-01-05 03:24:43.431341 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-05 03:24:43.431352 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=1  skipped=11  rescued=0 ignored=0 2026-01-05 03:24:43.431365 | orchestrator | testbed-node-1 : ok=15  changed=1  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-05 03:24:43.431376 | orchestrator | testbed-node-2 : ok=16  changed=2  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-05 03:24:43.431386 | orchestrator | 2026-01-05 03:24:43.431397 | orchestrator | 2026-01-05 03:24:43.431407 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-05 03:24:43.431418 | orchestrator | Monday 05 January 2026 03:24:27 +0000 (0:00:14.290) 0:01:08.225 ******** 2026-01-05 03:24:43.431428 | orchestrator | 
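The traceback above bottoms out in `(pymysql.err.OperationalError) (1193, "Unknown system variable 'transaction_isolation'")`: SQLAlchemy's MySQL dialect probes the server's default isolation level during dialect initialization, and `transaction_isolation` is the newer name for what older MySQL/MariaDB servers expose only as `tx_isolation`. The sketch below is illustrative only, not SQLAlchemy's or pymysql's actual code; the version cutoff and the `FakeOldServer` stand-in are assumptions made to reproduce the failure mode without a live database.

```python
# Hedged sketch of the failure mode seen in the keystone db-sync traceback:
# probing a server that predates the tx_isolation -> transaction_isolation
# rename with the newer variable name raises error 1193, while the legacy
# name still works. Simplified; not SQLAlchemy's real dialect code.

def isolation_level_query(server_version: tuple) -> str:
    """Pick the system-variable name by server version.

    Assumption for illustration: servers at or above (5, 7, 20) know
    transaction_isolation; anything older only knows tx_isolation.
    """
    if server_version >= (5, 7, 20):
        return "SELECT @@transaction_isolation"
    return "SELECT @@tx_isolation"


class FakeOldServer:
    """Minimal stand-in for a server that lacks the renamed variable."""

    KNOWN = {"tx_isolation": "REPEATABLE-READ"}

    def execute(self, query: str) -> str:
        name = query.rsplit("@@", 1)[1]
        if name not in self.KNOWN:
            # pymysql surfaces this as OperationalError(1193, ...)
            raise RuntimeError(
                f"(1193, \"Unknown system variable '{name}'\")"
            )
        return self.KNOWN[name]


if __name__ == "__main__":
    srv = FakeOldServer()
    # Probing with the new variable name reproduces the error above:
    try:
        srv.execute("SELECT @@transaction_isolation")
    except RuntimeError as exc:
        print("old server:", exc)
    # The legacy name still answers:
    print("fallback:", srv.execute(isolation_level_query((5, 6, 0))))
```

If the database cluster really is at a version that should know `transaction_isolation`, this error during an upgrade usually points at the client having connected (e.g. via a proxy) to a node that is still running the older server version.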
=============================================================================== 2026-01-05 03:24:43.431438 | orchestrator | keystone : Init keystone database upgrade ------------------------------ 14.29s 2026-01-05 03:24:43.431448 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.60s 2026-01-05 03:24:43.431458 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.32s 2026-01-05 03:24:43.431464 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.74s 2026-01-05 03:24:43.431470 | orchestrator | keystone : Check keystone containers ------------------------------------ 3.52s 2026-01-05 03:24:43.431477 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.35s 2026-01-05 03:24:43.431498 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.28s 2026-01-05 03:24:43.431504 | orchestrator | keystone : Enable log_bin_trust_function_creators function -------------- 2.47s 2026-01-05 03:24:43.431509 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.47s 2026-01-05 03:24:43.431515 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.76s 2026-01-05 03:24:43.431521 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.50s 2026-01-05 03:24:43.431547 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.38s 2026-01-05 03:24:43.431553 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 1.19s 2026-01-05 03:24:43.431559 | orchestrator | keystone : include_tasks ------------------------------------------------ 1.13s 2026-01-05 03:24:43.431564 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s 2026-01-05 03:24:43.431570 | orchestrator | keystone : 
Copying over existing policy file ---------------------------- 1.12s 2026-01-05 03:24:43.431576 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 1.01s 2026-01-05 03:24:43.431582 | orchestrator | keystone : include_tasks ------------------------------------------------ 1.01s 2026-01-05 03:24:43.431587 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 1.00s 2026-01-05 03:24:43.431593 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s 2026-01-05 03:24:43.431599 | orchestrator | 2026-01-05 03:24:43.431605 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-05 03:24:43.431610 | orchestrator | 2026-01-05 03:24:43.431616 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-05 03:24:43.431622 | orchestrator | Monday 05 January 2026 03:24:33 +0000 (0:00:01.493) 0:00:01.493 ******** 2026-01-05 03:24:43.431627 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:24:43.431635 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:24:43.431642 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:24:43.431649 | orchestrator | 2026-01-05 03:24:43.431656 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-05 03:24:43.431662 | orchestrator | Monday 05 January 2026 03:24:35 +0000 (0:00:01.696) 0:00:03.189 ******** 2026-01-05 03:24:43.431669 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-05 03:24:43.431676 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-05 03:24:43.431683 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-05 03:24:43.431690 | orchestrator | 2026-01-05 03:24:43.431696 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-05 03:24:43.431703 | orchestrator 
| 2026-01-05 03:24:43.431710 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 03:24:43.431717 | orchestrator | Monday 05 January 2026 03:24:37 +0000 (0:00:01.716) 0:00:04.905 ******** 2026-01-05 03:24:43.431725 | orchestrator | included: /ansible/roles/keystone/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:24:43.431732 | orchestrator | 2026-01-05 03:24:43.431739 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-05 03:24:43.431746 | orchestrator | Monday 05 January 2026 03:24:40 +0000 (0:00:02.943) 0:00:07.849 ******** 2026-01-05 03:24:43.431776 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:24:43.431790 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:24:43.431803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:24:43.431811 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:24:43.431820 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:24:43.431832 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:24:54.781905 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:24:54.782086 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:24:54.782101 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:24:54.782110 | orchestrator | 2026-01-05 03:24:54.782120 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-05 03:24:54.782153 | orchestrator | Monday 05 January 2026 03:24:43 +0000 (0:00:03.057) 
0:00:10.907 ******** 2026-01-05 03:24:54.782167 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:24:54.782179 | orchestrator | 2026-01-05 03:24:54.782192 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-05 03:24:54.782205 | orchestrator | Monday 05 January 2026 03:24:44 +0000 (0:00:01.145) 0:00:12.053 ******** 2026-01-05 03:24:54.782215 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:24:54.782223 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:24:54.782230 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:24:54.782237 | orchestrator | 2026-01-05 03:24:54.782245 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-05 03:24:54.782252 | orchestrator | Monday 05 January 2026 03:24:45 +0000 (0:00:01.396) 0:00:13.449 ******** 2026-01-05 03:24:54.782259 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 03:24:54.782267 | orchestrator | 2026-01-05 03:24:54.782274 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-05 03:24:54.782282 | orchestrator | Monday 05 January 2026 03:24:48 +0000 (0:00:02.241) 0:00:15.691 ******** 2026-01-05 03:24:54.782290 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-05 03:24:54.782297 | orchestrator | 2026-01-05 03:24:54.782304 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-05 03:24:54.782312 | orchestrator | Monday 05 January 2026 03:24:50 +0000 (0:00:02.093) 0:00:17.785 ******** 2026-01-05 03:24:54.782322 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:24:54.782355 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:24:54.782370 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:24:54.782379 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:24:54.782388 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:24:54.782396 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:24:54.782415 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:24:57.131762 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:24:57.131853 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:24:57.131860 | orchestrator | 2026-01-05 03:24:57.131866 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-05 03:24:57.131871 | orchestrator | Monday 05 January 2026 03:24:54 +0000 (0:00:04.474) 0:00:22.260 ******** 2026-01-05 03:24:57.131878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:24:57.131884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:24:57.131889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:24:57.131909 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:24:57.131925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:24:57.131933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:24:57.131937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:24:57.131941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:24:57.131945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:24:57.131953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:24:57.131957 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:24:57.131963 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:25:03.370499 | orchestrator | 2026-01-05 03:25:03.370605 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-05 03:25:03.370618 | orchestrator | Monday 05 January 2026 03:24:57 +0000 (0:00:02.351) 0:00:24.612 ******** 2026-01-05 03:25:03.370664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:25:03.370676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:25:03.370683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:25:03.370690 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:25:03.370698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-01-05 03:25:03.370728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:25:03.370752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:25:03.370759 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:25:03.370770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:25:03.370776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:25:03.370783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:25:03.370795 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:25:03.370802 | orchestrator | 2026-01-05 03:25:03.370809 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-05 03:25:03.370816 | orchestrator | Monday 05 
January 2026 03:24:58 +0000 (0:00:01.779) 0:00:26.391 ******** 2026-01-05 03:25:03.370823 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:25:03.370848 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:25:09.714339 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:25:09.714423 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:25:09.714450 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:25:09.714456 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:25:09.714461 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:25:09.714478 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:25:09.714495 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:25:09.714501 | orchestrator | 2026-01-05 03:25:09.714507 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-05 03:25:09.714512 | orchestrator | Monday 05 January 2026 03:25:03 +0000 (0:00:04.466) 0:00:30.858 ******** 2026-01-05 03:25:09.714517 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:25:09.714526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:25:09.714531 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:25:09.714536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:25:09.714546 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:25:19.137780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:25:19.137858 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:25:19.137865 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:25:19.137901 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:25:19.137907 | orchestrator | 2026-01-05 03:25:19.137912 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-05 03:25:19.137918 | orchestrator | Monday 05 January 2026 03:25:09 +0000 (0:00:06.339) 0:00:37.198 ******** 2026-01-05 03:25:19.137922 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:25:19.137927 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:25:19.137930 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:25:19.137934 | orchestrator | 2026-01-05 03:25:19.137938 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-05 03:25:19.137942 | orchestrator | Monday 05 January 2026 03:25:12 +0000 (0:00:02.850) 0:00:40.049 ******** 2026-01-05 03:25:19.137946 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:25:19.137950 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:25:19.137954 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:25:19.137958 | orchestrator | 2026-01-05 03:25:19.137962 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-05 03:25:19.137968 | orchestrator | Monday 05 January 2026 03:25:14 +0000 (0:00:01.904) 0:00:41.953 ******** 2026-01-05 03:25:19.137972 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:25:19.137976 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:25:19.137980 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:25:19.137983 | orchestrator | 2026-01-05 03:25:19.137987 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-05 03:25:19.137991 | 
orchestrator | Monday 05 January 2026 03:25:15 +0000 (0:00:01.372) 0:00:43.325 ******** 2026-01-05 03:25:19.138011 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:25:19.138085 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:25:19.138095 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:25:19.138101 | orchestrator | 2026-01-05 03:25:19.138107 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-05 03:25:19.138127 | orchestrator | Monday 05 January 2026 03:25:17 +0000 (0:00:01.393) 0:00:44.719 ******** 2026-01-05 03:25:19.138154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:25:19.138161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:25:19.138168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:25:19.138174 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:25:19.138180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:25:19.138191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:25:19.138209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:25:50.979966 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:25:50.980155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-05 03:25:50.980183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-05 03:25:50.980198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-05 03:25:50.980211 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:25:50.980223 | orchestrator | 2026-01-05 03:25:50.980235 | orchestrator | TASK [keystone : include_tasks] 
************************************************ 2026-01-05 03:25:50.980244 | orchestrator | Monday 05 January 2026 03:25:19 +0000 (0:00:01.896) 0:00:46.616 ******** 2026-01-05 03:25:50.980251 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:25:50.980258 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:25:50.980265 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:25:50.980272 | orchestrator | 2026-01-05 03:25:50.980301 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-05 03:25:50.980308 | orchestrator | Monday 05 January 2026 03:25:20 +0000 (0:00:01.354) 0:00:47.970 ******** 2026-01-05 03:25:50.980315 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 03:25:50.980323 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 03:25:50.980330 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-05 03:25:50.980337 | orchestrator | 2026-01-05 03:25:50.980357 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-05 03:25:50.980364 | orchestrator | Monday 05 January 2026 03:25:23 +0000 (0:00:02.750) 0:00:50.721 ******** 2026-01-05 03:25:50.980371 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 03:25:50.980377 | orchestrator | 2026-01-05 03:25:50.980384 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-05 03:25:50.980391 | orchestrator | Monday 05 January 2026 03:25:25 +0000 (0:00:02.063) 0:00:52.785 ******** 2026-01-05 03:25:50.980397 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:25:50.980404 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:25:50.980411 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:25:50.980417 | orchestrator | 2026-01-05 03:25:50.980424 | 
orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-05 03:25:50.980431 | orchestrator | Monday 05 January 2026 03:25:27 +0000 (0:00:01.881) 0:00:54.667 ******** 2026-01-05 03:25:50.980437 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-05 03:25:50.980444 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-05 03:25:50.980451 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-05 03:25:50.980457 | orchestrator | 2026-01-05 03:25:50.980464 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-05 03:25:50.980471 | orchestrator | Monday 05 January 2026 03:25:29 +0000 (0:00:02.366) 0:00:57.034 ******** 2026-01-05 03:25:50.980478 | orchestrator | ok: [testbed-node-0] 2026-01-05 03:25:50.980485 | orchestrator | ok: [testbed-node-1] 2026-01-05 03:25:50.980507 | orchestrator | ok: [testbed-node-2] 2026-01-05 03:25:50.980516 | orchestrator | 2026-01-05 03:25:50.980524 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-05 03:25:50.980532 | orchestrator | Monday 05 January 2026 03:25:31 +0000 (0:00:01.463) 0:00:58.498 ******** 2026-01-05 03:25:50.980541 | orchestrator | ok: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 03:25:50.980548 | orchestrator | ok: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 03:25:50.980557 | orchestrator | ok: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-05 03:25:50.980568 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 03:25:50.980585 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 03:25:50.980600 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-05 
03:25:50.980611 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 03:25:50.980622 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 03:25:50.980633 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-05 03:25:50.980644 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 03:25:50.980654 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 03:25:50.980664 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-05 03:25:50.980675 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 03:25:50.980696 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 03:25:50.980708 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-05 03:25:50.980721 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 03:25:50.980732 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 03:25:50.980744 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-05 03:25:50.980755 | orchestrator | ok: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 03:25:50.980764 | orchestrator | ok: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 03:25:50.980772 | orchestrator | ok: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-05 03:25:50.980780 | orchestrator | 2026-01-05 
03:25:50.980788 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-05 03:25:50.980796 | orchestrator | Monday 05 January 2026 03:25:41 +0000 (0:00:10.457) 0:01:08.956 ******** 2026-01-05 03:25:50.980804 | orchestrator | ok: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 03:25:50.980812 | orchestrator | ok: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 03:25:50.980820 | orchestrator | ok: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-05 03:25:50.980827 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 03:25:50.980835 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 03:25:50.980843 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-05 03:25:50.980851 | orchestrator | 2026-01-05 03:25:50.980859 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-05 03:25:50.980867 | orchestrator | Monday 05 January 2026 03:25:46 +0000 (0:00:04.750) 0:01:13.706 ******** 2026-01-05 03:25:50.980882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:25:50.980900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:26:12.907553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-05 03:26:12.907646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:26:12.907672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:26:12.907681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-05 03:26:12.907689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:26:12.907697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:26:12.907740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-05 03:26:12.907750 | orchestrator | 2026-01-05 03:26:12.907758 | orchestrator | TASK [keystone : Enable log_bin_trust_function_creators function] ************** 2026-01-05 03:26:12.907766 | orchestrator | Monday 05 January 2026 03:25:50 +0000 (0:00:04.758) 0:01:18.464 ******** 2026-01-05 03:26:12.907774 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:26:12.907782 | orchestrator | 2026-01-05 03:26:12.907788 | orchestrator | TASK [keystone : Init keystone database upgrade] ******************************* 2026-01-05 03:26:12.907795 | orchestrator | Monday 05 January 2026 03:25:54 +0000 (0:00:03.486) 0:01:21.951 ******** 2026-01-05 03:26:12.907802 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:26:12.907809 | orchestrator | skipping: [testbed-node-2] 2026-01-05 03:26:12.907816 | orchestrator | changed: [testbed-node-0] 2026-01-05 03:26:12.907823 | orchestrator | 2026-01-05 03:26:12.907829 | orchestrator | TASK [keystone : Finish keystone database upgrade] ***************************** 2026-01-05 03:26:12.907836 | orchestrator | Monday 05 January 2026 03:25:56 +0000 (0:00:01.602) 0:01:23.554 ******** 2026-01-05 03:26:12.907843 | orchestrator | skipping: [testbed-node-0] 2026-01-05 03:26:12.907850 | orchestrator | skipping: [testbed-node-1] 2026-01-05 03:26:12.907857 | orchestrator | changed: [testbed-node-2] 2026-01-05 03:26:12.907863 | orchestrator | 2026-01-05 03:26:12.907870 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 03:26:12.907877 | orchestrator | Monday 05 January 2026 03:25:57 +0000 (0:00:01.692) 0:01:25.247 ******** 2026-01-05 03:26:12.907884 | orchestrator | 2026-01-05 03:26:12.907890 | 
orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 03:26:12.907897 | orchestrator | Monday 05 January 2026 03:25:58 +0000 (0:00:00.654) 0:01:25.901 ******** 2026-01-05 03:26:12.907904 | orchestrator | 2026-01-05 03:26:12.907911 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-05 03:26:12.907917 | orchestrator | Monday 05 January 2026 03:25:58 +0000 (0:00:00.455) 0:01:26.357 ******** 2026-01-05 03:26:12.907924 | orchestrator | 2026-01-05 03:26:12.907931 | orchestrator | RUNNING HANDLER [keystone : Init keystone database upgrade] ******************** 2026-01-05 03:26:12.907938 | orchestrator | Monday 05 January 2026 03:25:59 +0000 (0:00:00.807) 0:01:27.164 ******** 2026-01-05 03:26:12.908568 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\nINFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying service configuration files\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\nINFO:__main__:Setting permission for /usr/bin/keystone-startup.sh\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\nINFO:__main__:Setting permission for /etc/keystone/keystone.conf\nINFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla\nINFO:__main__:Setting permission for /var/log/kolla/keystone/keystone.log\nINFO:__main__:Setting permission for /etc/keystone/fernet-keys\n++ cat 
/run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ keystone:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ keystone:keystone != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync --expand\n2026-01-05 03:26:11.762 1075 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342\n2026-01-05 03:26:11.772 1075 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-05 03:26:11.772 1075 ERROR keystone Traceback (most recent call last):\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-05 03:26:11.772 1075 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, 
in raw_connection\n2026-01-05 03:26:11.772 1075 ERROR keystone return self.pool.connect()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-05 03:26:11.772 1075 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-05 03:26:11.772 1075 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-05 03:26:11.772 1075 ERROR keystone rec = pool._do_get()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-05 03:26:11.772 1075 ERROR keystone with util.safe_reraise():\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-05 03:26:11.772 1075 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-05 03:26:11.772 1075 ERROR keystone return self._create_connection()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-05 
03:26:11.772 1075 ERROR keystone return _ConnectionRecord(self)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-05 03:26:11.772 1075 ERROR keystone self.__connect()\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-05 03:26:11.772 1075 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-05 03:26:11.772 1075 ERROR keystone self(*args, **kw)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-05 03:26:11.772 1075 ERROR keystone fn(*args, **kw)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-05 03:26:11.772 1075 ERROR keystone return once_fn(*arg, **kw)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-05 03:26:11.772 1075 ERROR keystone dialect.initialize(c)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-05 03:26:11.772 1075 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-05 03:26:11.772 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-05 03:26:11.772 1075 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-05 03:26:11.772 1075 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-05 03:26:11.772 1075 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-05 03:26:11.772 1075 ERROR keystone result = self._query(query)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-05 03:26:11.772 1075 ERROR keystone conn.query(q)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-05 03:26:11.772 1075 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-05 03:26:11.772 1075 ERROR keystone result.read()\n2026-01-05 03:26:11.772 1075 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-05 03:26:11.772 1075 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-05 03:26:11.772 1075 ERROR keystone packet.raise_for_error()\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-05 03:26:11.772 1075 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-05 03:26:11.772 1075 ERROR keystone raise errorclass(errno, errval)\n2026-01-05 03:26:11.772 1075 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-05 03:26:11.772 1075 ERROR keystone \n2026-01-05 03:26:11.772 1075 ERROR keystone The above exception was the direct cause of the following exception:\n2026-01-05 03:26:11.772 1075 ERROR keystone \n2026-01-05 03:26:11.772 1075 ERROR keystone Traceback (most recent call last):\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in \n2026-01-05 03:26:11.772 1075 ERROR keystone sys.exit(main())\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main\n2026-01-05 03:26:11.772 1075 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in 
main\n2026-01-05 03:26:11.772 1075 ERROR keystone CONF.command.cmd_class.main()\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 487, in main\n2026-01-05 03:26:11.772 1075 ERROR keystone upgrades.expand_schema()\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 287, in expand_schema\n2026-01-05 03:26:11.772 1075 ERROR keystone _db_sync(EXPAND_BRANCH, engine=engine)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync\n2026-01-05 03:26:11.772 1075 ERROR keystone with sql.session_for_write() as session:\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-05 03:26:11.772 1075 ERROR keystone return next(self.gen)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope\n2026-01-05 03:26:11.772 1075 ERROR keystone with current._produce_block(\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-05 03:26:11.772 1075 ERROR keystone return next(self.gen)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session\n2026-01-05 03:26:11.772 1075 ERROR keystone self.session = self.factory._create_session(\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in 
_create_session\n2026-01-05 03:26:11.772 1075 ERROR keystone self._start()\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in _start\n2026-01-05 03:26:11.772 1075 ERROR keystone self._setup_for_connection(\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection\n2026-01-05 03:26:11.772 1075 ERROR keystone engine = engines.create_engine(\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator\n2026-01-05 03:26:11.772 1075 ERROR keystone return wrapped(*args, **kwargs)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine\n2026-01-05 03:26:11.772 1075 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection\n2026-01-05 03:26:11.772 1075 ERROR keystone return engine.connect()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect\n2026-01-05 03:26:11.772 1075 ERROR keystone return self._connection_cls(self)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__\n2026-01-05 03:26:11.772 1075 ERROR keystone Connection._handle_dbapi_exception_noconnection(\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection\n2026-01-05 03:26:11.772 1075 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-05 03:26:11.772 1075 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-05 03:26:11.772 1075 ERROR keystone return self.pool.connect()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-05 03:26:11.772 1075 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-05 03:26:11.772 1075 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-05 03:26:11.772 1075 ERROR keystone rec = pool._do_get()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-05 
03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-05 03:26:11.772 1075 ERROR keystone with util.safe_reraise():\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-05 03:26:11.772 1075 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-05 03:26:11.772 1075 ERROR keystone return self._create_connection()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-05 03:26:11.772 1075 ERROR keystone return _ConnectionRecord(self)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-05 03:26:11.772 1075 ERROR keystone self.__connect()\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-05 03:26:11.772 1075 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-05 03:26:11.772 1075 ERROR keystone self(*args, **kw)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in 
__call__\n2026-01-05 03:26:11.772 1075 ERROR keystone fn(*args, **kw)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-05 03:26:11.772 1075 ERROR keystone return once_fn(*arg, **kw)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-05 03:26:11.772 1075 ERROR keystone dialect.initialize(c)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-05 03:26:11.772 1075 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-05 03:26:11.772 1075 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-05 03:26:11.772 1075 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-05 03:26:11.772 1075 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-05 03:26:11.772 1075 ERROR keystone result 
= self._query(query)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-05 03:26:11.772 1075 ERROR keystone conn.query(q)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-05 03:26:11.772 1075 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-05 03:26:11.772 1075 ERROR keystone result.read()\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-05 03:26:11.772 1075 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-05 03:26:11.772 1075 ERROR keystone packet.raise_for_error()\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-05 03:26:11.772 1075 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-05 03:26:11.772 1075 ERROR keystone raise errorclass(errno, errval)\n2026-01-05 03:26:11.772 1075 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system 
variable 'transaction_isolation'\")\n2026-01-05 03:26:11.772 1075 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-05 03:26:11.772 1075 ERROR keystone \n", "stderr_lines": ["+ sudo -E kolla_set_configs", "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying service configuration files", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh", "INFO:__main__:Setting permission for /usr/bin/keystone-startup.sh", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf", "INFO:__main__:Setting permission for /etc/keystone/keystone.conf", "INFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla", "INFO:__main__:Setting permission for /var/log/kolla/keystone/keystone.log", "INFO:__main__:Setting permission for /etc/keystone/fernet-keys", "++ cat /run_command", "+ CMD=/usr/bin/keystone-startup.sh", "+ ARGS=", "+ sudo kolla_copy_cacerts", "rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL", "+ sudo kolla_install_projects", "+ [[ ! -n '' ]]", "+ . kolla_extend_start", "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", "++ [[ ! -d /var/log/kolla/keystone ]]", "+++ stat -c %U:%G /var/log/kolla/keystone", "++ [[ keystone:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", "++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'", "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", "++ [[ keystone:keystone != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", "+++ stat -c %a /var/log/kolla/keystone", "++ [[ 2755 != \\7\\5\\5 ]]", "++ chmod 755 /var/log/kolla/keystone", "++ EXTRA_KEYSTONE_MANAGE_ARGS=", "++ [[ -n 0 ]]", "++ sudo -H -u keystone keystone-manage db_sync --expand", "2026-01-05 03:26:11.762 1075 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342", "2026-01-05 03:26:11.772 1075 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "(Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-05 03:26:11.772 1075 ERROR keystone Traceback (most recent call last):", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-05 03:26:11.772 1075 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-05 03:26:11.772 1075 ERROR keystone return self.pool.connect()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-05 03:26:11.772 1075 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-05 03:26:11.772 1075 
ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-05 03:26:11.772 1075 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-05 03:26:11.772 1075 ERROR keystone rec = pool._do_get()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-05 03:26:11.772 1075 ERROR keystone with util.safe_reraise():", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-05 03:26:11.772 1075 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-05 03:26:11.772 1075 ERROR keystone return self._create_connection()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-05 03:26:11.772 1075 ERROR keystone return _ConnectionRecord(self)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-05 03:26:11.772 1075 ERROR keystone self.__connect()", "2026-01-05 03:26:11.772 1075 ERROR keystone 
File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-05 03:26:11.772 1075 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-05 03:26:11.772 1075 ERROR keystone self(*args, **kw)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-05 03:26:11.772 1075 ERROR keystone fn(*args, **kw)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-05 03:26:11.772 1075 ERROR keystone return once_fn(*arg, **kw)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-05 03:26:11.772 1075 ERROR keystone dialect.initialize(c)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-05 03:26:11.772 1075 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-05 03:26:11.772 1075 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-05 03:26:11.772 1075 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-05 03:26:11.772 1075 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-05 03:26:11.772 1075 ERROR keystone result = self._query(query)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-05 03:26:11.772 1075 ERROR keystone conn.query(q)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-05 03:26:11.772 1075 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-05 03:26:11.772 1075 ERROR keystone result.read()", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-05 03:26:11.772 1075 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 
ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-05 03:26:11.772 1075 ERROR keystone packet.raise_for_error()", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-05 03:26:11.772 1075 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-05 03:26:11.772 1075 ERROR keystone raise errorclass(errno, errval)", "2026-01-05 03:26:11.772 1075 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-05 03:26:11.772 1075 ERROR keystone ", "2026-01-05 03:26:11.772 1075 ERROR keystone The above exception was the direct cause of the following exception:", "2026-01-05 03:26:11.772 1075 ERROR keystone ", "2026-01-05 03:26:11.772 1075 ERROR keystone Traceback (most recent call last):", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in ", "2026-01-05 03:26:11.772 1075 ERROR keystone sys.exit(main())", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main", "2026-01-05 03:26:11.772 1075 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main", "2026-01-05 03:26:11.772 1075 ERROR keystone CONF.command.cmd_class.main()", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 487, in main", "2026-01-05 03:26:11.772 1075 ERROR keystone 
upgrades.expand_schema()", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 287, in expand_schema", "2026-01-05 03:26:11.772 1075 ERROR keystone _db_sync(EXPAND_BRANCH, engine=engine)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync", "2026-01-05 03:26:11.772 1075 ERROR keystone with sql.session_for_write() as session:", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-05 03:26:11.772 1075 ERROR keystone return next(self.gen)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope", "2026-01-05 03:26:11.772 1075 ERROR keystone with current._produce_block(", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-05 03:26:11.772 1075 ERROR keystone return next(self.gen)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session", "2026-01-05 03:26:11.772 1075 ERROR keystone self.session = self.factory._create_session(", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session", "2026-01-05 03:26:11.772 1075 ERROR keystone self._start()", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in _start", 
"2026-01-05 03:26:11.772 1075 ERROR keystone self._setup_for_connection(", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection", "2026-01-05 03:26:11.772 1075 ERROR keystone engine = engines.create_engine(", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator", "2026-01-05 03:26:11.772 1075 ERROR keystone return wrapped(*args, **kwargs)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine", "2026-01-05 03:26:11.772 1075 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection", "2026-01-05 03:26:11.772 1075 ERROR keystone return engine.connect()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect", "2026-01-05 03:26:11.772 1075 ERROR keystone return self._connection_cls(self)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__", "2026-01-05 03:26:11.772 1075 ERROR keystone Connection._handle_dbapi_exception_noconnection(", "2026-01-05 03:26:11.772 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection", "2026-01-05 03:26:11.772 1075 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-05 03:26:11.772 1075 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-05 03:26:11.772 1075 ERROR keystone return self.pool.connect()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-05 03:26:11.772 1075 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-05 03:26:11.772 1075 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-05 03:26:11.772 1075 ERROR keystone rec = pool._do_get()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-05 03:26:11.772 1075 ERROR keystone with util.safe_reraise():", 
"2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-05 03:26:11.772 1075 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-05 03:26:11.772 1075 ERROR keystone return self._create_connection()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-05 03:26:11.772 1075 ERROR keystone return _ConnectionRecord(self)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-05 03:26:11.772 1075 ERROR keystone self.__connect()", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-05 03:26:11.772 1075 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-05 03:26:11.772 1075 ERROR keystone self(*args, **kw)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-05 03:26:11.772 1075 ERROR keystone fn(*args, **kw)", "2026-01-05 03:26:11.772 1075 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-05 03:26:11.772 1075 ERROR keystone return once_fn(*arg, **kw)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-05 03:26:11.772 1075 ERROR keystone dialect.initialize(c)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-05 03:26:11.772 1075 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-05 03:26:11.772 1075 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-05 03:26:11.772 1075 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-05 03:26:11.772 1075 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-05 03:26:11.772 1075 ERROR keystone result = self._query(query)", "2026-01-05 03:26:11.772 1075 ERROR keystone 
^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-05 03:26:11.772 1075 ERROR keystone conn.query(q)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-05 03:26:11.772 1075 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-05 03:26:11.772 1075 ERROR keystone result.read()", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-05 03:26:11.772 1075 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-05 03:26:11.772 1075 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-05 03:26:11.772 1075 ERROR keystone packet.raise_for_error()", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-05 03:26:11.772 1075 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-05 03:26:11.772 1075 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-05 03:26:11.772 1075 ERROR keystone raise errorclass(errno, errval)", "2026-01-05 03:26:11.772 1075 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", 
"2026-01-05 03:26:11.772 1075 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)",
"2026-01-05 03:26:11.772 1075 ERROR keystone "], "stdout": "Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n", "stdout_lines": ["Updating certificates in /etc/ssl/certs...", "1 added, 0 removed; done.", "Running hooks in /etc/ca-certificates/update.d...", "done."]}
2026-01-05 03:26:15.814191 | orchestrator |
2026-01-05 03:26:15.814275 | orchestrator | PLAY RECAP *********************************************************************
2026-01-05 03:26:15.814286 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=1  skipped=11  rescued=0 ignored=0
2026-01-05 03:26:15.814313 | orchestrator | testbed-node-1 : ok=15  changed=1  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-05 03:26:15.814319 | orchestrator | testbed-node-2 : ok=16  changed=2  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-05 03:26:15.814324 | orchestrator |
2026-01-05 03:26:15.814328 | orchestrator |
2026-01-05 03:26:15.814332 | orchestrator | TASKS RECAP ********************************************************************
2026-01-05 03:26:15.814354 | orchestrator | Monday 05 January 2026 03:26:15 +0000 (0:00:15.656) 0:01:42.821 ********
2026-01-05 03:26:15.814358 | orchestrator | ===============================================================================
2026-01-05 03:26:15.814362 | orchestrator | keystone : Init keystone database upgrade ------------------------------ 15.66s
2026-01-05 03:26:15.814366 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.46s
2026-01-05 03:26:15.814369 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.34s
2026-01-05 03:26:15.814373 | orchestrator | keystone : Check keystone containers ------------------------------------ 4.76s
2026-01-05 03:26:15.814389 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 4.75s
2026-01-05 03:26:15.814393 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 4.47s
2026-01-05 03:26:15.814397 | orchestrator | keystone : Copying over config.json files for services ------------------ 4.47s
2026-01-05 03:26:15.814407 | orchestrator | keystone : Enable log_bin_trust_function_creators function -------------- 3.49s
2026-01-05 03:26:15.814411 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 3.06s
2026-01-05 03:26:15.814414 | orchestrator | keystone : include_tasks ------------------------------------------------ 2.94s
2026-01-05 03:26:15.814418 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.85s
2026-01-05 03:26:15.814422 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.75s
2026-01-05 03:26:15.814426 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 2.37s
2026-01-05 03:26:15.814431 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 2.35s
2026-01-05 03:26:15.814438 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 2.24s
2026-01-05 03:26:15.814447 | orchestrator | keystone : include_tasks ------------------------------------------------ 2.09s
2026-01-05 03:26:15.814456 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 2.06s
2026-01-05 03:26:15.814462 | orchestrator | keystone : Flush handlers ----------------------------------------------- 1.92s
2026-01-05 03:26:15.814468 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 1.90s
2026-01-05 03:26:15.814474 | orchestrator | keystone : Copying over existing policy file ---------------------------- 1.90s
2026-01-05 03:26:16.407998 | orchestrator | ERROR
2026-01-05 03:26:16.408562 | orchestrator | {
2026-01-05 03:26:16.408696 | orchestrator |   "delta": "0:43:28.275946",
2026-01-05 03:26:16.408768 | orchestrator |   "end": "2026-01-05 03:26:16.178170",
2026-01-05 03:26:16.408830 | orchestrator |   "msg": "non-zero return code",
2026-01-05 03:26:16.408885 | orchestrator |   "rc": 2,
2026-01-05 03:26:16.408938 | orchestrator |   "start": "2026-01-05 02:42:47.902224"
2026-01-05 03:26:16.409025 | orchestrator | } failure
2026-01-05 03:26:16.508905 |
2026-01-05 03:26:16.509064 | PLAY RECAP
2026-01-05 03:26:16.509137 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-01-05 03:26:16.509168 |
2026-01-05 03:26:16.822622 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/update-stable.yml@main]
2026-01-05 03:26:16.831341 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-05 03:26:17.624683 |
2026-01-05 03:26:17.624859 | PLAY [Post output play]
2026-01-05 03:26:17.643711 |
2026-01-05 03:26:17.643942 | LOOP [stage-output : Register sources]
2026-01-05 03:26:17.714659 |
2026-01-05 03:26:17.715021 | TASK [stage-output : Check sudo]
2026-01-05 03:26:18.595950 | orchestrator | sudo: a password is required
2026-01-05 03:26:18.755349 | orchestrator | ok: Runtime: 0:00:00.011218
2026-01-05 03:26:18.762686 |
2026-01-05 03:26:18.762825 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-05 03:26:18.791168 |
2026-01-05 03:26:18.791436 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-05 03:26:18.860018 | orchestrator | ok
2026-01-05 03:26:18.868654 |
2026-01-05 03:26:18.868800 | LOOP [stage-output : Ensure target folders exist]
2026-01-05 03:26:19.418666 | orchestrator | ok: "docs"
2026-01-05 03:26:19.419010 |
2026-01-05 03:26:19.776479 | orchestrator | ok: "artifacts"
2026-01-05 03:26:20.068283 | orchestrator | ok: "logs"
2026-01-05 03:26:20.095167 |
2026-01-05 03:26:20.095417 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-05 03:26:20.133730 |
2026-01-05 03:26:20.134083 | TASK [stage-output : Make all log files readable]
2026-01-05 03:26:20.493310 | orchestrator | ok
2026-01-05 03:26:20.500403 |
2026-01-05 03:26:20.500541 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-05 03:26:20.545781 | orchestrator | skipping: Conditional result was False
2026-01-05 03:26:20.565374 |
2026-01-05 03:26:20.565541 | TASK [stage-output : Discover log files for compression]
2026-01-05 03:26:20.590679 | orchestrator | skipping: Conditional result was False
2026-01-05 03:26:20.612818 |
2026-01-05 03:26:20.613143 | LOOP [stage-output : Archive everything from logs]
2026-01-05 03:26:20.666065 |
2026-01-05 03:26:20.666308 | PLAY [Post cleanup play]
2026-01-05 03:26:20.677439 |
2026-01-05 03:26:20.677569 | TASK [Set cloud fact (Zuul deployment)]
2026-01-05 03:26:20.740204 | orchestrator | ok
2026-01-05 03:26:20.749721 |
2026-01-05 03:26:20.749858 | TASK [Set cloud fact (local deployment)]
2026-01-05 03:26:20.777514 | orchestrator | skipping: Conditional result was False
2026-01-05 03:26:20.798127 |
2026-01-05 03:26:20.798310 | TASK [Clean the cloud environment]
2026-01-05 03:26:21.558693 | orchestrator | 2026-01-05 03:26:21 - clean up servers
2026-01-05 03:26:22.628145 | orchestrator | 2026-01-05 03:26:22 - testbed-manager
2026-01-05 03:26:22.711367 | orchestrator | 2026-01-05 03:26:22 - testbed-node-2
2026-01-05 03:26:22.798141 | orchestrator | 2026-01-05 03:26:22 - testbed-node-1
2026-01-05 03:26:22.885642 | orchestrator | 2026-01-05 03:26:22 - testbed-node-5
2026-01-05 03:26:22.973749 | orchestrator | 2026-01-05 03:26:22 - testbed-node-4
2026-01-05 03:26:23.060514 | orchestrator | 2026-01-05 03:26:23 - testbed-node-3
2026-01-05 03:26:23.164232 | orchestrator | 2026-01-05 03:26:23 - testbed-node-0
2026-01-05 03:26:23.255737 | orchestrator | 2026-01-05 03:26:23 - clean up keypairs
2026-01-05 03:26:23.275313 | orchestrator | 2026-01-05 03:26:23 - testbed
2026-01-05 03:26:23.301119 | orchestrator | 2026-01-05 03:26:23 - wait for servers to be gone
2026-01-05 03:26:34.240340 | orchestrator | 2026-01-05 03:26:34 - clean up ports
2026-01-05 03:26:34.431912 | orchestrator | 2026-01-05 03:26:34 - 3045e1f0-8b4f-40d0-a3f0-13651ee6a259
2026-01-05 03:26:34.712461 | orchestrator | 2026-01-05 03:26:34 - 6e2aeb4b-e5db-41a7-8b42-b41478428ccf
2026-01-05 03:26:34.981173 | orchestrator | 2026-01-05 03:26:34 - 9020f612-3ec1-4b0d-b904-00f85a154b40
2026-01-05 03:26:35.181382 | orchestrator | 2026-01-05 03:26:35 - 9d818deb-906d-4bfe-8a59-9302ef8b7e5e
2026-01-05 03:26:35.382194 | orchestrator | 2026-01-05 03:26:35 - a1ba6642-324c-44d6-9fc3-ce7e68d34321
2026-01-05 03:26:35.599775 | orchestrator | 2026-01-05 03:26:35 - b01c9de4-c15f-4022-b400-188e51775351
2026-01-05 03:26:35.990148 | orchestrator | 2026-01-05 03:26:35 - e46a3318-4bb3-4be9-9553-aa08330510d2
2026-01-05 03:26:36.209537 | orchestrator | 2026-01-05 03:26:36 - clean up volumes
2026-01-05 03:26:36.343431 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-1-node-base
2026-01-05 03:26:36.381333 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-5-node-base
2026-01-05 03:26:36.419277 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-2-node-base
2026-01-05 03:26:36.457194 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-0-node-base
2026-01-05 03:26:36.496192 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-4-node-base
2026-01-05 03:26:36.535988 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-3-node-base
2026-01-05 03:26:36.574772 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-manager-base
2026-01-05 03:26:36.617521 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-1-node-4
2026-01-05 03:26:36.662777 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-3-node-3
2026-01-05 03:26:36.705315 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-6-node-3
2026-01-05 03:26:36.743982 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-7-node-4
2026-01-05 03:26:36.788101 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-0-node-3
2026-01-05 03:26:36.830167 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-2-node-5
2026-01-05 03:26:36.871422 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-4-node-4
2026-01-05 03:26:36.915629 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-8-node-5
2026-01-05 03:26:36.957948 | orchestrator | 2026-01-05 03:26:36 - testbed-volume-5-node-5
2026-01-05 03:26:37.001726 | orchestrator | 2026-01-05 03:26:37 - disconnect routers
2026-01-05 03:26:37.115869 | orchestrator | 2026-01-05 03:26:37 - testbed
2026-01-05 03:26:38.097275 | orchestrator | 2026-01-05 03:26:38 - clean up subnets
2026-01-05 03:26:38.157112 | orchestrator | 2026-01-05 03:26:38 - subnet-testbed-management
2026-01-05 03:26:38.331274 | orchestrator | 2026-01-05 03:26:38 - clean up networks
2026-01-05 03:26:38.516101 | orchestrator | 2026-01-05 03:26:38 - net-testbed-management
2026-01-05 03:26:39.292020 | orchestrator | 2026-01-05 03:26:39 - clean up security groups
2026-01-05 03:26:39.336131 | orchestrator | 2026-01-05 03:26:39 - testbed-node
2026-01-05 03:26:39.454892 | orchestrator | 2026-01-05 03:26:39 - testbed-management
2026-01-05 03:26:39.585160 | orchestrator | 2026-01-05 03:26:39 - clean up floating ips
2026-01-05 03:26:39.619389 | orchestrator | 2026-01-05 03:26:39 - 81.163.193.95
2026-01-05 03:26:39.965693 | orchestrator | 2026-01-05 03:26:39 - clean up routers
2026-01-05 03:26:40.081971 | orchestrator | 2026-01-05 03:26:40 - testbed
2026-01-05 03:26:41.356359 | orchestrator | ok: Runtime: 0:00:19.791553
2026-01-05 03:26:41.359015 |
2026-01-05 03:26:41.359136 | PLAY RECAP
2026-01-05 03:26:41.359244 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-05 03:26:41.359433 |
2026-01-05 03:26:41.535920 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-05 03:26:41.539007 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-05 03:26:42.360298 |
2026-01-05 03:26:42.360494 | PLAY [Cleanup play]
2026-01-05 03:26:42.378540 |
2026-01-05 03:26:42.378719 | TASK [Set cloud fact (Zuul deployment)]
2026-01-05 03:26:42.429905 | orchestrator | ok
2026-01-05 03:26:42.437041 |
2026-01-05 03:26:42.437240 | TASK [Set cloud fact (local deployment)]
2026-01-05 03:26:42.471201 | orchestrator | skipping: Conditional result was False
2026-01-05 03:26:42.481519 |
2026-01-05 03:26:42.481650 | TASK [Clean the cloud environment]
2026-01-05 03:26:43.710228 | orchestrator | 2026-01-05 03:26:43 - clean up servers
2026-01-05 03:26:44.192637 | orchestrator | 2026-01-05 03:26:44 - clean up keypairs
2026-01-05 03:26:44.210898 | orchestrator | 2026-01-05 03:26:44 - wait for servers to be gone
2026-01-05 03:26:44.257116 | orchestrator | 2026-01-05 03:26:44 - clean up ports
2026-01-05 03:26:44.331796 | orchestrator | 2026-01-05 03:26:44 - clean up volumes
2026-01-05 03:26:44.398500 | orchestrator | 2026-01-05 03:26:44 - disconnect routers
2026-01-05 03:26:44.439333 | orchestrator | 2026-01-05 03:26:44 - clean up subnets
2026-01-05 03:26:44.470858 | orchestrator | 2026-01-05 03:26:44 - clean up networks
2026-01-05 03:26:44.655461 | orchestrator | 2026-01-05 03:26:44 - clean up security groups
2026-01-05 03:26:44.688813 | orchestrator | 2026-01-05 03:26:44 - clean up floating ips
2026-01-05 03:26:44.713116 | orchestrator | 2026-01-05 03:26:44 - clean up routers
2026-01-05 03:26:45.020459 | orchestrator | ok: Runtime: 0:00:01.437061
2026-01-05 03:26:45.023340 |
2026-01-05 03:26:45.023456 | PLAY RECAP
2026-01-05 03:26:45.023536 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-05 03:26:45.023574 |
2026-01-05 03:26:45.165829 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-05 03:26:45.169684 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-05 03:26:45.978928 |
2026-01-05 03:26:45.979127 | PLAY [Base post-fetch]
2026-01-05 03:26:45.996426 |
2026-01-05 03:26:45.996580 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-05 03:26:46.063366 | orchestrator | skipping: Conditional result was False
2026-01-05 03:26:46.077593 |
2026-01-05 03:26:46.077901 | TASK [fetch-output : Set log path for single node]
2026-01-05 03:26:46.139322 | orchestrator | ok
2026-01-05 03:26:46.149026 |
2026-01-05 03:26:46.149188 | LOOP [fetch-output : Ensure local output dirs]
2026-01-05 03:26:46.652805 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a54607074d3f4bc6b3302cee85a7e89a/work/logs"
2026-01-05 03:26:46.929301 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a54607074d3f4bc6b3302cee85a7e89a/work/artifacts"
2026-01-05 03:26:47.246180 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a54607074d3f4bc6b3302cee85a7e89a/work/docs"
2026-01-05 03:26:47.274005 |
2026-01-05 03:26:47.274202 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-05 03:26:48.262826 | orchestrator | changed: .d..t...... ./
2026-01-05 03:26:48.263252 | orchestrator | changed: All items complete
2026-01-05 03:26:48.263301 |
2026-01-05 03:26:49.041020 | orchestrator | changed: .d..t...... ./
2026-01-05 03:26:49.782855 | orchestrator | changed: .d..t...... ./
2026-01-05 03:26:49.813112 |
2026-01-05 03:26:49.813272 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-05 03:26:49.851158 | orchestrator | skipping: Conditional result was False
2026-01-05 03:26:49.854205 | orchestrator | skipping: Conditional result was False
2026-01-05 03:26:49.870756 |
2026-01-05 03:26:49.871211 | PLAY RECAP
2026-01-05 03:26:49.871284 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-05 03:26:49.871312 |
2026-01-05 03:26:50.028596 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-05 03:26:50.031288 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-05 03:26:50.847816 |
2026-01-05 03:26:50.848012 | PLAY [Base post]
2026-01-05 03:26:50.863157 |
2026-01-05 03:26:50.863324 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-05 03:26:51.910581 | orchestrator | changed
2026-01-05 03:26:51.918953 |
2026-01-05 03:26:51.919083 | PLAY RECAP
2026-01-05 03:26:51.919148 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-05 03:26:51.919213 |
2026-01-05 03:26:52.053476 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-05 03:26:52.056110 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-05 03:26:52.893210 |
2026-01-05 03:26:52.893383 | PLAY [Base post-logs]
2026-01-05 03:26:52.904547 |
2026-01-05 03:26:52.904697 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-05 03:26:53.452442 | localhost | changed
2026-01-05 03:26:53.469185 |
2026-01-05 03:26:53.469367 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-05 03:26:53.496855 | localhost | ok
2026-01-05 03:26:53.501842 |
2026-01-05 03:26:53.502076 | TASK [Set zuul-log-path fact]
2026-01-05 03:26:53.530960 | localhost | ok
2026-01-05 03:26:53.548170 |
2026-01-05 03:26:53.548329 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-05 03:26:53.587392 | localhost | ok
2026-01-05 03:26:53.593167 |
2026-01-05 03:26:53.593318 | TASK [upload-logs : Create log directories]
2026-01-05 03:26:54.180050 | localhost | changed
2026-01-05 03:26:54.185137 |
2026-01-05 03:26:54.185330 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-05 03:26:54.743754 | localhost -> localhost | ok: Runtime: 0:00:00.012642
2026-01-05 03:26:54.750687 |
2026-01-05 03:26:54.750878 | TASK [upload-logs : Upload logs to log server]
2026-01-05 03:26:55.350114 | localhost | Output suppressed because no_log was given
2026-01-05 03:26:55.353498 |
2026-01-05 03:26:55.353667 | LOOP [upload-logs : Compress console log and json output]
2026-01-05 03:26:55.437458 | localhost | skipping: Conditional result was False
2026-01-05 03:26:55.448294 | localhost | skipping: Conditional result was False
2026-01-05 03:26:55.452455 |
2026-01-05 03:26:55.452595 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-05 03:26:55.516837 | localhost | skipping: Conditional result was False
2026-01-05 03:26:55.517680 |
2026-01-05 03:26:55.520796 | localhost | skipping: Conditional result was False
2026-01-05 03:26:55.527348 |
2026-01-05 03:26:55.527517 | LOOP [upload-logs : Upload console log and json output]